Jan 31 08:56:20 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 08:56:20 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 08:56:20 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 08:56:20 localhost kernel: BIOS-provided physical RAM map:
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 08:56:20 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 08:56:20 localhost kernel: NX (Execute Disable) protection: active
Jan 31 08:56:20 localhost kernel: APIC: Static calls initialized
Jan 31 08:56:20 localhost kernel: SMBIOS 2.8 present.
Jan 31 08:56:20 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 08:56:20 localhost kernel: Hypervisor detected: KVM
Jan 31 08:56:20 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 08:56:20 localhost kernel: kvm-clock: using sched offset of 8075484389 cycles
Jan 31 08:56:20 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 08:56:20 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 31 08:56:20 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 08:56:20 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 08:56:20 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 08:56:20 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 08:56:20 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 08:56:20 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 08:56:20 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 08:56:20 localhost kernel: Using GB pages for direct mapping
Jan 31 08:56:20 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 08:56:20 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 08:56:20 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 08:56:20 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 08:56:20 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 08:56:20 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 08:56:20 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 08:56:20 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 08:56:20 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 08:56:20 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 08:56:20 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 08:56:20 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 08:56:20 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 08:56:20 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 08:56:20 localhost kernel: No NUMA configuration found
Jan 31 08:56:20 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 08:56:20 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 31 08:56:20 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 08:56:20 localhost kernel: Zone ranges:
Jan 31 08:56:20 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 08:56:20 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 08:56:20 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 08:56:20 localhost kernel:   Device   empty
Jan 31 08:56:20 localhost kernel: Movable zone start for each node
Jan 31 08:56:20 localhost kernel: Early memory node ranges
Jan 31 08:56:20 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 08:56:20 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 08:56:20 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 08:56:20 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 08:56:20 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 08:56:20 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 08:56:20 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 08:56:20 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 08:56:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 08:56:20 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 08:56:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 08:56:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 08:56:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 08:56:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 08:56:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 08:56:20 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 08:56:20 localhost kernel: TSC deadline timer available
Jan 31 08:56:20 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 08:56:20 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 08:56:20 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 08:56:20 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 08:56:20 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 08:56:20 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 08:56:20 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 08:56:20 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 08:56:20 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 08:56:20 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 08:56:20 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 08:56:20 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 08:56:20 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 08:56:20 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 08:56:20 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 08:56:20 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 08:56:20 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 08:56:20 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 08:56:20 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 08:56:20 localhost kernel: random: crng init done
Jan 31 08:56:20 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 08:56:20 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 08:56:20 localhost kernel: Fallback order for Node 0: 0 
Jan 31 08:56:20 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 08:56:20 localhost kernel: Policy zone: Normal
Jan 31 08:56:20 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 08:56:20 localhost kernel: software IO TLB: area num 8.
Jan 31 08:56:20 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 08:56:20 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 08:56:20 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 08:56:20 localhost kernel: Dynamic Preempt: voluntary
Jan 31 08:56:20 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 08:56:20 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 08:56:20 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 08:56:20 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 08:56:20 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 08:56:20 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 08:56:20 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 08:56:20 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 08:56:20 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 08:56:20 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 08:56:20 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 08:56:20 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 08:56:20 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 08:56:20 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 08:56:20 localhost kernel: Console: colour VGA+ 80x25
Jan 31 08:56:20 localhost kernel: printk: console [ttyS0] enabled
Jan 31 08:56:20 localhost kernel: ACPI: Core revision 20230331
Jan 31 08:56:20 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 08:56:20 localhost kernel: x2apic enabled
Jan 31 08:56:20 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 08:56:20 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 08:56:20 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 08:56:20 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 08:56:20 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 08:56:20 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 08:56:20 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 08:56:20 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 08:56:20 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 08:56:20 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 08:56:20 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 08:56:20 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 08:56:20 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 08:56:20 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 08:56:20 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 08:56:20 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 08:56:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 08:56:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 08:56:20 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 08:56:20 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 08:56:20 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 08:56:20 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 08:56:20 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 08:56:20 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 08:56:20 localhost kernel: landlock: Up and running.
Jan 31 08:56:20 localhost kernel: Yama: becoming mindful.
Jan 31 08:56:20 localhost kernel: SELinux:  Initializing.
Jan 31 08:56:20 localhost kernel: LSM support for eBPF active
Jan 31 08:56:20 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 08:56:20 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 08:56:20 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 08:56:20 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 08:56:20 localhost kernel: ... version:                0
Jan 31 08:56:20 localhost kernel: ... bit width:              48
Jan 31 08:56:20 localhost kernel: ... generic registers:      6
Jan 31 08:56:20 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 08:56:20 localhost kernel: ... max period:             00007fffffffffff
Jan 31 08:56:20 localhost kernel: ... fixed-purpose events:   0
Jan 31 08:56:20 localhost kernel: ... event mask:             000000000000003f
Jan 31 08:56:20 localhost kernel: signal: max sigframe size: 1776
Jan 31 08:56:20 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 08:56:20 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 08:56:20 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 08:56:20 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 08:56:20 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 08:56:20 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 08:56:20 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 31 08:56:20 localhost kernel: node 0 deferred pages initialised in 20ms
Jan 31 08:56:20 localhost kernel: Memory: 7763776K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 31 08:56:20 localhost kernel: devtmpfs: initialized
Jan 31 08:56:20 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 08:56:20 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 08:56:20 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 08:56:20 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 08:56:20 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 08:56:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 08:56:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 08:56:20 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 08:56:20 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 08:56:20 localhost kernel: audit: type=2000 audit(1769849778.368:1): state=initialized audit_enabled=0 res=1
Jan 31 08:56:20 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 08:56:20 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 08:56:20 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 08:56:20 localhost kernel: cpuidle: using governor menu
Jan 31 08:56:20 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 08:56:20 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 08:56:20 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 08:56:20 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 08:56:20 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 08:56:20 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 08:56:20 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 08:56:20 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 08:56:20 localhost kernel: Demotion targets for Node 0: null
Jan 31 08:56:20 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 08:56:20 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 08:56:20 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 08:56:20 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 08:56:20 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 08:56:20 localhost kernel: ACPI: Interpreter enabled
Jan 31 08:56:20 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 08:56:20 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 08:56:20 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 08:56:20 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 08:56:20 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 08:56:20 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 08:56:20 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [3] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [4] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [5] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [6] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [7] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [8] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [9] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [10] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [11] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [12] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [13] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [14] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [15] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [16] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [17] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [18] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [19] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [20] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [21] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [22] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [23] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [24] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [25] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [26] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [27] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [28] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [29] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [30] registered
Jan 31 08:56:20 localhost kernel: acpiphp: Slot [31] registered
Jan 31 08:56:20 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 08:56:20 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 08:56:20 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 08:56:20 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 08:56:20 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 08:56:20 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 08:56:20 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 08:56:20 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 08:56:20 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 08:56:20 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 08:56:20 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 08:56:20 localhost kernel: iommu: Default domain type: Translated
Jan 31 08:56:20 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 08:56:20 localhost kernel: SCSI subsystem initialized
Jan 31 08:56:20 localhost kernel: ACPI: bus type USB registered
Jan 31 08:56:20 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 08:56:20 localhost kernel: usbcore: registered new interface driver hub
Jan 31 08:56:20 localhost kernel: usbcore: registered new device driver usb
Jan 31 08:56:20 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 08:56:20 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 08:56:20 localhost kernel: PTP clock support registered
Jan 31 08:56:20 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 08:56:20 localhost kernel: NetLabel: Initializing
Jan 31 08:56:20 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 08:56:20 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 08:56:20 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 08:56:20 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 08:56:20 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 08:56:20 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 08:56:20 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 08:56:20 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 08:56:20 localhost kernel: vgaarb: loaded
Jan 31 08:56:20 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 08:56:20 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 08:56:20 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 08:56:20 localhost kernel: pnp: PnP ACPI init
Jan 31 08:56:20 localhost kernel: pnp 00:03: [dma 2]
Jan 31 08:56:20 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 08:56:20 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 08:56:20 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 08:56:20 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 08:56:20 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 08:56:20 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 08:56:20 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 08:56:20 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 08:56:20 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 08:56:20 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 08:56:20 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 08:56:20 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 08:56:20 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 08:56:20 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 08:56:20 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 08:56:20 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 08:56:20 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 08:56:20 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24773 usecs
Jan 31 08:56:20 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 08:56:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 08:56:20 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 08:56:20 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 08:56:20 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 08:56:20 localhost kernel: Initialise system trusted keyrings
Jan 31 08:56:20 localhost kernel: Key type blacklist registered
Jan 31 08:56:20 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 08:56:20 localhost kernel: zbud: loaded
Jan 31 08:56:20 localhost kernel: integrity: Platform Keyring initialized
Jan 31 08:56:20 localhost kernel: integrity: Machine keyring initialized
Jan 31 08:56:20 localhost kernel: Freeing initrd memory: 88000K
Jan 31 08:56:20 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 08:56:20 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 08:56:20 localhost kernel: Key type asymmetric registered
Jan 31 08:56:20 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 08:56:20 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 08:56:20 localhost kernel: io scheduler mq-deadline registered
Jan 31 08:56:20 localhost kernel: io scheduler kyber registered
Jan 31 08:56:20 localhost kernel: io scheduler bfq registered
Jan 31 08:56:20 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 08:56:20 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 08:56:20 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 08:56:20 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 08:56:20 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 08:56:20 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 08:56:20 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 08:56:20 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 08:56:20 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 08:56:20 localhost kernel: Non-volatile memory driver v1.3
Jan 31 08:56:20 localhost kernel: rdac: device handler registered
Jan 31 08:56:20 localhost kernel: hp_sw: device handler registered
Jan 31 08:56:20 localhost kernel: emc: device handler registered
Jan 31 08:56:20 localhost kernel: alua: device handler registered
Jan 31 08:56:20 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 08:56:20 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 08:56:20 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 08:56:20 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 08:56:20 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 08:56:20 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 08:56:20 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 08:56:20 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 08:56:20 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 08:56:20 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 08:56:20 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 08:56:20 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 08:56:20 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 08:56:20 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 08:56:20 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 08:56:20 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 08:56:20 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 08:56:20 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 08:56:20 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 08:56:20 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 08:56:20 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T08:56:19 UTC (1769849779)
Jan 31 08:56:20 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 08:56:20 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 08:56:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 08:56:20 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 08:56:20 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 08:56:20 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 08:56:20 localhost kernel: usbhid: USB HID core driver
Jan 31 08:56:20 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 08:56:20 localhost kernel: Initializing XFRM netlink socket
Jan 31 08:56:20 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 08:56:20 localhost kernel: Segment Routing with IPv6
Jan 31 08:56:20 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 08:56:20 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 08:56:20 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 08:56:20 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 08:56:20 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 08:56:20 localhost kernel: sched_clock: Marking stable (1083010690, 164976740)->(1363489160, -115501730)
Jan 31 08:56:20 localhost kernel: registered taskstats version 1
Jan 31 08:56:20 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 08:56:20 localhost kernel: Demotion targets for Node 0: null
Jan 31 08:56:20 localhost kernel: page_owner is disabled
Jan 31 08:56:20 localhost kernel: Key type .fscrypt registered
Jan 31 08:56:20 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 08:56:20 localhost kernel: Key type big_key registered
Jan 31 08:56:20 localhost kernel: Key type encrypted registered
Jan 31 08:56:20 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 08:56:20 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 08:56:20 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 08:56:20 localhost kernel: ima: No architecture policies found
Jan 31 08:56:20 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 08:56:20 localhost kernel: evm: security.selinux
Jan 31 08:56:20 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 08:56:20 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 08:56:20 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 08:56:20 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 08:56:20 localhost kernel: evm: security.apparmor (disabled)
Jan 31 08:56:20 localhost kernel: evm: security.ima
Jan 31 08:56:20 localhost kernel: evm: security.capability
Jan 31 08:56:20 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 08:56:20 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 08:56:20 localhost kernel: Running certificate verification RSA selftest
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 08:56:20 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 08:56:20 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 08:56:20 localhost kernel: clk: Disabling unused clocks
Jan 31 08:56:20 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 08:56:20 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 08:56:20 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 08:56:20 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 08:56:20 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 08:56:20 localhost kernel: Run /init as init process
Jan 31 08:56:20 localhost kernel:   with arguments:
Jan 31 08:56:20 localhost kernel:     /init
Jan 31 08:56:20 localhost kernel:   with environment:
Jan 31 08:56:20 localhost kernel:     HOME=/
Jan 31 08:56:20 localhost kernel:     TERM=linux
Jan 31 08:56:20 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 08:56:20 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 08:56:20 localhost systemd[1]: Detected virtualization kvm.
Jan 31 08:56:20 localhost systemd[1]: Detected architecture x86-64.
Jan 31 08:56:20 localhost systemd[1]: Running in initrd.
Jan 31 08:56:20 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 08:56:20 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 08:56:20 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 08:56:20 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 08:56:20 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 08:56:20 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 08:56:20 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 08:56:20 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 08:56:20 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 08:56:20 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 08:56:20 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 08:56:20 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 08:56:20 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 08:56:20 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 08:56:20 localhost systemd[1]: Reached target Local File Systems.
Jan 31 08:56:20 localhost systemd[1]: Reached target Path Units.
Jan 31 08:56:20 localhost systemd[1]: Reached target Slice Units.
Jan 31 08:56:20 localhost systemd[1]: Reached target Swaps.
Jan 31 08:56:20 localhost systemd[1]: Reached target Timer Units.
Jan 31 08:56:20 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 08:56:20 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 08:56:20 localhost systemd[1]: Listening on Journal Socket.
Jan 31 08:56:20 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 08:56:20 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 08:56:20 localhost systemd[1]: Reached target Socket Units.
Jan 31 08:56:20 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 08:56:20 localhost systemd[1]: Starting Journal Service...
Jan 31 08:56:20 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 08:56:20 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 08:56:20 localhost systemd[1]: Starting Create System Users...
Jan 31 08:56:20 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 08:56:20 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 08:56:20 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 08:56:20 localhost systemd[1]: Finished Create System Users.
Jan 31 08:56:20 localhost systemd-journald[309]: Journal started
Jan 31 08:56:20 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/9990fbaef679470b991813eed4e2ece1) is 8.0M, max 153.6M, 145.6M free.
Jan 31 08:56:20 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 31 08:56:20 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 31 08:56:20 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 08:56:20 localhost systemd[1]: Started Journal Service.
Jan 31 08:56:20 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 08:56:20 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 08:56:20 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 08:56:20 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 08:56:20 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 08:56:20 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 08:56:20 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 08:56:20 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 08:56:20 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 08:56:20 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 08:56:20 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 08:56:20 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 08:56:20 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 08:56:20 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 08:56:20 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 08:56:20 localhost kernel: RPC: Registered udp transport module.
Jan 31 08:56:20 localhost kernel: RPC: Registered tcp transport module.
Jan 31 08:56:20 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 08:56:20 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 08:56:20 localhost rpc.statd[446]: Version 2.5.4 starting
Jan 31 08:56:20 localhost rpc.statd[446]: Initializing NSM state
Jan 31 08:56:20 localhost rpc.idmapd[451]: Setting log level to 0
Jan 31 08:56:20 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 08:56:20 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 08:56:20 localhost systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 08:56:20 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 08:56:20 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 08:56:20 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 08:56:20 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 08:56:20 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 08:56:20 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 08:56:20 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 08:56:20 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 08:56:20 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 08:56:20 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 08:56:20 localhost systemd[1]: Reached target Network.
Jan 31 08:56:20 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 08:56:20 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 08:56:20 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 08:56:20 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 08:56:20 localhost systemd-udevd[508]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:56:20 localhost kernel:  vda: vda1
Jan 31 08:56:20 localhost kernel: libata version 3.00 loaded.
Jan 31 08:56:20 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 08:56:20 localhost kernel: scsi host0: ata_piix
Jan 31 08:56:20 localhost kernel: scsi host1: ata_piix
Jan 31 08:56:20 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 08:56:20 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 08:56:20 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 08:56:21 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 08:56:21 localhost kernel: ata1: found unknown device (class 0)
Jan 31 08:56:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 08:56:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 08:56:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 08:56:21 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 08:56:21 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 08:56:21 localhost systemd[1]: Reached target System Initialization.
Jan 31 08:56:21 localhost systemd[1]: Reached target Basic System.
Jan 31 08:56:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 08:56:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 08:56:21 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 08:56:21 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 08:56:21 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 08:56:21 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 08:56:21 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 08:56:21 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 08:56:21 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 08:56:21 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 08:56:21 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 08:56:21 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 08:56:21 localhost systemd[1]: Mounting /sysroot...
Jan 31 08:56:21 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 08:56:21 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 08:56:21 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 08:56:21 localhost systemd[1]: Mounted /sysroot.
Jan 31 08:56:21 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 08:56:21 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 08:56:21 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 08:56:21 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 08:56:21 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 08:56:21 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 08:56:21 localhost systemd[1]: Starting dracut mount hook...
Jan 31 08:56:21 localhost systemd[1]: Finished dracut mount hook.
Jan 31 08:56:21 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 08:56:21 localhost rpc.idmapd[451]: exiting on signal 15
Jan 31 08:56:21 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 08:56:21 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 08:56:21 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 08:56:22 localhost systemd[1]: Stopped target Network.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Timer Units.
Jan 31 08:56:22 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 08:56:22 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Basic System.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Path Units.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Slice Units.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Socket Units.
Jan 31 08:56:22 localhost systemd[1]: Stopped target System Initialization.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Swaps.
Jan 31 08:56:22 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 08:56:22 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 08:56:22 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 08:56:22 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 08:56:22 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 08:56:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 08:56:22 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 08:56:22 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 08:56:22 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 08:56:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 08:56:22 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 08:56:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 08:56:22 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Closed udev Control Socket.
Jan 31 08:56:22 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 08:56:22 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 08:56:22 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 08:56:22 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 08:56:22 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 08:56:22 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 08:56:22 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Stopped Create System Users.
Jan 31 08:56:22 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 08:56:22 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 08:56:22 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 08:56:22 localhost systemd[1]: Reached target Switch Root.
Jan 31 08:56:22 localhost systemd[1]: Starting Switch Root...
Jan 31 08:56:22 localhost systemd[1]: Switching root.
Jan 31 08:56:22 localhost systemd-journald[309]: Journal stopped
Jan 31 08:56:24 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jan 31 08:56:24 localhost kernel: audit: type=1404 audit(1769849782.567:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:56:24 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:56:24 localhost kernel: audit: type=1403 audit(1769849782.810:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 08:56:24 localhost systemd[1]: Successfully loaded SELinux policy in 251.713ms.
Jan 31 08:56:24 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 47.999ms.
Jan 31 08:56:24 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 08:56:24 localhost systemd[1]: Detected virtualization kvm.
Jan 31 08:56:24 localhost systemd[1]: Detected architecture x86-64.
Jan 31 08:56:24 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:56:24 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Stopped Switch Root.
Jan 31 08:56:24 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 08:56:24 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 08:56:24 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 08:56:24 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 08:56:24 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 08:56:24 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 08:56:24 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 08:56:24 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 08:56:24 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 08:56:24 localhost systemd[1]: Stopped target Switch Root.
Jan 31 08:56:24 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 08:56:24 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 08:56:24 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 08:56:24 localhost systemd[1]: Reached target Path Units.
Jan 31 08:56:24 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 08:56:24 localhost systemd[1]: Reached target Slice Units.
Jan 31 08:56:24 localhost systemd[1]: Reached target Swaps.
Jan 31 08:56:24 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 08:56:24 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 08:56:24 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 08:56:24 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 08:56:24 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 08:56:24 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 08:56:24 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 08:56:24 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 08:56:24 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 08:56:24 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 08:56:24 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 08:56:24 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 08:56:24 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 08:56:24 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 08:56:24 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 08:56:24 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 08:56:24 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 08:56:24 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 08:56:24 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 08:56:24 localhost systemd[1]: Stopped Journal Service.
Jan 31 08:56:24 localhost kernel: fuse: init (API version 7.37)
Jan 31 08:56:24 localhost systemd[1]: Starting Journal Service...
Jan 31 08:56:24 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 08:56:24 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 08:56:24 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 08:56:24 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 08:56:24 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 08:56:24 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 08:56:24 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 08:56:24 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 08:56:24 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 08:56:24 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 08:56:24 localhost systemd-journald[676]: Journal started
Jan 31 08:56:24 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 08:56:24 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 08:56:24 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Started Journal Service.
Jan 31 08:56:24 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 08:56:24 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 08:56:24 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 08:56:24 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 08:56:24 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 08:56:24 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 08:56:24 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 08:56:24 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 08:56:24 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 08:56:24 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 08:56:24 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 08:56:24 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 08:56:24 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 08:56:24 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:24 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 08:56:24 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 08:56:24 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 08:56:24 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 08:56:24 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 08:56:24 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 08:56:24 localhost systemd[1]: Starting Create System Users...
Jan 31 08:56:24 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 08:56:24 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 08:56:24 localhost systemd-journald[676]: Received client request to flush runtime journal.
Jan 31 08:56:24 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 08:56:24 localhost systemd[1]: Finished Create System Users.
Jan 31 08:56:24 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 08:56:24 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 08:56:24 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 08:56:24 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 08:56:24 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 08:56:24 localhost systemd[1]: Reached target Local File Systems.
Jan 31 08:56:24 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 08:56:24 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 08:56:24 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 08:56:24 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 08:56:24 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 08:56:24 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 08:56:24 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 08:56:24 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 31 08:56:24 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 08:56:24 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 08:56:24 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 08:56:24 localhost systemd[1]: Starting RPC Bind...
Jan 31 08:56:24 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 08:56:24 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 08:56:24 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 08:56:24 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 08:56:25 localhost systemd[1]: Started RPC Bind.
Jan 31 08:56:25 localhost augenrules[706]: /sbin/augenrules: No change
Jan 31 08:56:25 localhost augenrules[721]: No rules
Jan 31 08:56:25 localhost augenrules[721]: enabled 1
Jan 31 08:56:25 localhost augenrules[721]: failure 1
Jan 31 08:56:25 localhost augenrules[721]: pid 701
Jan 31 08:56:25 localhost augenrules[721]: rate_limit 0
Jan 31 08:56:25 localhost augenrules[721]: backlog_limit 8192
Jan 31 08:56:25 localhost augenrules[721]: lost 0
Jan 31 08:56:25 localhost augenrules[721]: backlog 4
Jan 31 08:56:25 localhost augenrules[721]: backlog_wait_time 60000
Jan 31 08:56:25 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 31 08:56:25 localhost augenrules[721]: enabled 1
Jan 31 08:56:25 localhost augenrules[721]: failure 1
Jan 31 08:56:25 localhost augenrules[721]: pid 701
Jan 31 08:56:25 localhost augenrules[721]: rate_limit 0
Jan 31 08:56:25 localhost augenrules[721]: backlog_limit 8192
Jan 31 08:56:25 localhost augenrules[721]: lost 0
Jan 31 08:56:25 localhost augenrules[721]: backlog 8
Jan 31 08:56:25 localhost augenrules[721]: backlog_wait_time 60000
Jan 31 08:56:25 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 31 08:56:25 localhost systemd[1]: Started Security Auditing Service.
Jan 31 08:56:25 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 08:56:25 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 08:56:25 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 08:56:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 08:56:25 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 08:56:25 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 08:56:25 localhost systemd[1]: Starting Update is Completed...
Jan 31 08:56:25 localhost systemd[1]: Finished Update is Completed.
Jan 31 08:56:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 08:56:25 localhost systemd[1]: Reached target System Initialization.
Jan 31 08:56:25 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 08:56:25 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 08:56:25 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 08:56:25 localhost systemd[1]: Reached target Timer Units.
Jan 31 08:56:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 08:56:25 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 08:56:25 localhost systemd[1]: Reached target Socket Units.
Jan 31 08:56:25 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 08:56:25 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 08:56:25 localhost systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:56:25 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 08:56:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 08:56:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 08:56:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 08:56:25 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 08:56:25 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 08:56:25 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 08:56:25 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 08:56:25 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 08:56:25 localhost systemd[1]: Reached target Basic System.
Jan 31 08:56:25 localhost dbus-broker-lau[763]: Ready
Jan 31 08:56:25 localhost systemd[1]: Starting NTP client/server...
Jan 31 08:56:25 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 08:56:25 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 08:56:25 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 08:56:25 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 08:56:25 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 08:56:25 localhost systemd[1]: Started irqbalance daemon.
Jan 31 08:56:25 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 08:56:25 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 08:56:25 localhost kernel: [drm] features: -context_init
Jan 31 08:56:25 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 08:56:25 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:56:25 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:56:25 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:56:25 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 08:56:25 localhost kernel: [drm] number of scanouts: 1
Jan 31 08:56:25 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 08:56:25 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 08:56:25 localhost kernel: [drm] number of cap sets: 0
Jan 31 08:56:25 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 08:56:25 localhost systemd[1]: Starting User Login Management...
Jan 31 08:56:25 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 08:56:25 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 08:56:25 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 08:56:25 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 08:56:25 localhost chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 08:56:25 localhost chronyd[800]: Loaded 0 symmetric keys
Jan 31 08:56:25 localhost chronyd[800]: Using right/UTC timezone to obtain leap second data
Jan 31 08:56:25 localhost chronyd[800]: Loaded seccomp filter (level 2)
Jan 31 08:56:25 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 08:56:25 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 08:56:25 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 08:56:25 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 08:56:25 localhost systemd[1]: Started NTP client/server.
Jan 31 08:56:26 localhost systemd-logind[795]: New seat seat0.
Jan 31 08:56:26 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 08:56:26 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 08:56:26 localhost systemd[1]: Started User Login Management.
Jan 31 08:56:26 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 08:56:26 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 08:56:26 localhost iptables.init[788]: iptables: Applying firewall rules: [  OK  ]
Jan 31 08:56:26 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 08:56:26 localhost cloud-init[838]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 08:56:26 +0000. Up 8.22 seconds.
Jan 31 08:56:26 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 08:56:26 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 08:56:26 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpwflemxyb.mount: Deactivated successfully.
Jan 31 08:56:27 localhost systemd[1]: Starting Hostname Service...
Jan 31 08:56:27 localhost systemd[1]: Started Hostname Service.
Jan 31 08:56:27 np0005603742.novalocal systemd-hostnamed[852]: Hostname set to <np0005603742.novalocal> (static)
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Starting Network Manager...
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.8651] NetworkManager (version 1.54.3-2.el9) is starting... (boot:6687f1f7-7543-41e0-ab3e-fbff009a01ab)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.8656] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9056] manager[0x5563d6825000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9123] hostname: hostname: using hostnamed
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9124] hostname: static hostname changed from (none) to "np0005603742.novalocal"
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9132] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9286] manager[0x5563d6825000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9287] manager[0x5563d6825000]: rfkill: WWAN hardware radio set enabled
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9524] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9525] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9525] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9526] manager: Networking is enabled by state file
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9529] settings: Loaded settings plugin: keyfile (internal)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9589] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9612] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9627] dhcp: init: Using DHCP client 'internal'
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9633] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9647] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9659] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9696] device (lo): Activation: starting connection 'lo' (b49ad265-99ce-4f7a-8dd0-0523eb06b6be)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9702] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9704] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Started Network Manager.
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9744] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9754] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9757] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9758] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9763] device (eth0): carrier: link connected
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9765] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9770] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9775] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9778] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9779] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Reached target Network.
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9781] manager: NetworkManager state is now CONNECTING
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9783] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9788] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9790] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9846] dhcp4 (eth0): state changed new lease, address=38.102.83.89
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9851] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 08:56:27 np0005603742.novalocal NetworkManager[856]: <info>  [1769849787.9865] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 08:56:27 np0005603742.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0123] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0128] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0132] device (lo): Activation: successful, device activated.
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0136] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0138] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0141] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0143] device (eth0): Activation: successful, device activated.
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0146] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 08:56:28 np0005603742.novalocal NetworkManager[856]: <info>  [1769849788.0149] manager: startup complete
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Reached target NFS client services.
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 08:56:28 np0005603742.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 08:56:28 +0000. Up 9.84 seconds.
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.89         | 255.255.255.0 | global | fa:16:3e:3f:47:d3 |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe3f:47d3/64 |       .       |  link  | fa:16:3e:3f:47:d3 |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 08:56:28 np0005603742.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Jan 31 08:56:30 np0005603742.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Generating public/private rsa key pair.
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: SHA256:IlyoWvTeUyU3Jrb+KmQipJ32hh8Wr3vowikPnzIK/OI root@np0005603742.novalocal
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +---[RSA 3072]----+
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |                 |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |     .           |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |  . . . + =      |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: | ..+ . . B .     |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: | +o.= . S        |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |oo=..=o+         |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |++ =+=+ .        |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |=+*o+oo. .       |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |+E=*=o ....      |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: SHA256:VFsZc9CunpuAok8O5EKq5nlCDhmkQ5l02R7ZKCc8LxU root@np0005603742.novalocal
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +---[ECDSA 256]---+
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |..+.oE=   . ==.  |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: | =.* B . . o.o.  |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |+   O . . .  .   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |+  . o .      .  |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: | +. o   S    .   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |o+ o     .  .    |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |= . o o . .. .   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |.+.o = .   .o.   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |+oo ..o     o.   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: SHA256:LtHpi0+Jm7Nw2XRuk4Z/xREtic8kQRI736yOFoDF8EM root@np0005603742.novalocal
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +--[ED25519 256]--+
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |      .oE oooo o |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |       oo  oo = .|
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |       oo o  = o |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |      ...o o o+  |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |      . S.. ..o. |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |       O =.. .o  |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |    . = B *...   |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |     o.* =.+.    |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: |      =+o.o..    |
Jan 31 08:56:30 np0005603742.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Reached target Network is Online.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting System Logging Service...
Jan 31 08:56:30 np0005603742.novalocal sm-notify[1003]: Version 2.5.4 starting
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 08:56:30 np0005603742.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 08:56:30 np0005603742.novalocal sshd[1005]: Server listening on :: port 22.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started Command Scheduler.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started Getty on tty1.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 08:56:30 np0005603742.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Jan 31 08:56:30 np0005603742.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 08:56:30 np0005603742.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 35% if used.)
Jan 31 08:56:30 np0005603742.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 08:56:30 np0005603742.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Jan 31 08:56:30 np0005603742.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Started System Logging Service.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 08:56:30 np0005603742.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 08:56:30 np0005603742.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:56:30 np0005603742.novalocal kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Jan 31 08:56:30 np0005603742.novalocal kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1132]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 08:56:31 +0000. Up 12.59 seconds.
Jan 31 08:56:31 np0005603742.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 08:56:31 np0005603742.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 08:56:31 np0005603742.novalocal dracut[1265]: dracut-057-102.git20250818.el9
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1283]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 08:56:31 +0000. Up 12.96 seconds.
Jan 31 08:56:31 np0005603742.novalocal dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1295]: #############################################################
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1299]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1305]: 256 SHA256:VFsZc9CunpuAok8O5EKq5nlCDhmkQ5l02R7ZKCc8LxU root@np0005603742.novalocal (ECDSA)
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1309]: 256 SHA256:LtHpi0+Jm7Nw2XRuk4Z/xREtic8kQRI736yOFoDF8EM root@np0005603742.novalocal (ED25519)
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1311]: 3072 SHA256:IlyoWvTeUyU3Jrb+KmQipJ32hh8Wr3vowikPnzIK/OI root@np0005603742.novalocal (RSA)
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1314]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1316]: #############################################################
Jan 31 08:56:31 np0005603742.novalocal cloud-init[1283]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 08:56:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.19 seconds
Jan 31 08:56:31 np0005603742.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 08:56:31 np0005603742.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1352]: Unable to negotiate with 38.102.83.114 port 39738: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1356]: Unable to negotiate with 38.102.83.114 port 39754: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1361]: Unable to negotiate with 38.102.83.114 port 39760: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1366]: Connection reset by 38.102.83.114 port 39774 [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1350]: Connection closed by 38.102.83.114 port 39734 [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1382]: Unable to negotiate with 38.102.83.114 port 39798: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1354]: Connection closed by 38.102.83.114 port 39748 [preauth]
Jan 31 08:56:31 np0005603742.novalocal sshd-session[1387]: Unable to negotiate with 38.102.83.114 port 39806: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 08:56:32 np0005603742.novalocal sshd-session[1374]: Connection closed by 38.102.83.114 port 39782 [preauth]
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: memstrack is not available
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: memstrack is not available
Jan 31 08:56:32 np0005603742.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 08:56:32 np0005603742.novalocal chronyd[800]: Selected source 199.182.221.110 (2.centos.pool.ntp.org)
Jan 31 08:56:32 np0005603742.novalocal chronyd[800]: System clock TAI offset set to 37 seconds
Jan 31 08:56:33 np0005603742.novalocal dracut[1269]: *** Including module: systemd ***
Jan 31 08:56:33 np0005603742.novalocal dracut[1269]: *** Including module: fips ***
Jan 31 08:56:33 np0005603742.novalocal dracut[1269]: *** Including module: systemd-initrd ***
Jan 31 08:56:33 np0005603742.novalocal dracut[1269]: *** Including module: i18n ***
Jan 31 08:56:33 np0005603742.novalocal dracut[1269]: *** Including module: drm ***
Jan 31 08:56:34 np0005603742.novalocal dracut[1269]: *** Including module: prefixdevname ***
Jan 31 08:56:34 np0005603742.novalocal dracut[1269]: *** Including module: kernel-modules ***
Jan 31 08:56:34 np0005603742.novalocal chronyd[800]: Selected source 54.39.17.239 (2.centos.pool.ntp.org)
Jan 31 08:56:34 np0005603742.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: qemu ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: fstab-sys ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: rootfs-block ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: terminfo ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: udev-rules ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: virtiofs ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: dracut-systemd ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: usrmount ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: base ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: fs-lib ***
Jan 31 08:56:35 np0005603742.novalocal dracut[1269]: *** Including module: kdumpbase ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:   microcode_ctl module: mangling fw_dir
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 25 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 33 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 31 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 34 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 32 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 08:56:36 np0005603742.novalocal irqbalance[792]: IRQ 29 affinity is now unmanaged
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Including module: openssl ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Including module: shutdown ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Including module: squash ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Including modules done ***
Jan 31 08:56:36 np0005603742.novalocal dracut[1269]: *** Installing kernel module dependencies ***
Jan 31 08:56:37 np0005603742.novalocal dracut[1269]: *** Installing kernel module dependencies done ***
Jan 31 08:56:37 np0005603742.novalocal dracut[1269]: *** Resolving executable dependencies ***
Jan 31 08:56:38 np0005603742.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 08:56:39 np0005603742.novalocal dracut[1269]: *** Resolving executable dependencies done ***
Jan 31 08:56:39 np0005603742.novalocal dracut[1269]: *** Generating early-microcode cpio image ***
Jan 31 08:56:39 np0005603742.novalocal dracut[1269]: *** Store current command line parameters ***
Jan 31 08:56:39 np0005603742.novalocal dracut[1269]: Stored kernel commandline:
Jan 31 08:56:39 np0005603742.novalocal dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 31 08:56:40 np0005603742.novalocal dracut[1269]: *** Install squash loader ***
Jan 31 08:56:41 np0005603742.novalocal dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: *** Hardlinking files ***
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Mode:           real
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Files:          50
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Linked:         0 files
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Compared:       0 xattrs
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Compared:       0 files
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Saved:          0 B
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: Duration:       0.000205 seconds
Jan 31 08:56:42 np0005603742.novalocal dracut[1269]: *** Hardlinking files done ***
Jan 31 08:56:44 np0005603742.novalocal dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 08:56:44 np0005603742.novalocal kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Jan 31 08:56:44 np0005603742.novalocal kdumpctl[1014]: kdump: Starting kdump: [OK]
Jan 31 08:56:44 np0005603742.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 08:56:44 np0005603742.novalocal systemd[1]: Startup finished in 1.397s (kernel) + 2.688s (initrd) + 22.413s (userspace) = 26.499s.
Jan 31 08:56:57 np0005603742.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 08:57:05 np0005603742.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 56148 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 08:57:05 np0005603742.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 08:57:05 np0005603742.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 08:57:05 np0005603742.novalocal systemd-logind[795]: New session 1 of user zuul.
Jan 31 08:57:05 np0005603742.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 08:57:05 np0005603742.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Created slice User Application Slice.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Reached target Paths.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Reached target Timers.
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 31 08:57:05 np0005603742.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Reached target Sockets.
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Reached target Basic System.
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Reached target Main User Target.
Jan 31 08:57:06 np0005603742.novalocal systemd[4308]: Startup finished in 131ms.
Jan 31 08:57:06 np0005603742.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 08:57:06 np0005603742.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 08:57:06 np0005603742.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:57:06 np0005603742.novalocal python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:57:09 np0005603742.novalocal python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:57:11 np0005603742.novalocal sshd-session[4453]: Invalid user sol from 2.57.122.238 port 60584
Jan 31 08:57:11 np0005603742.novalocal sshd-session[4453]: Connection closed by invalid user sol 2.57.122.238 port 60584 [preauth]
Jan 31 08:57:14 np0005603742.novalocal python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:57:16 np0005603742.novalocal python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 08:57:17 np0005603742.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjCEF0c8rpid3TgLHvsEYu2MrlyQYsvaJbnklVlvWYizEf0iYhKg3xLfKbwPoFrd9HemQD7HbhXksmnIzRABzB09mYNpGaecHyULIXIeTn2Oj6eHwGEOx/y0vhs4wGJTzjH+Tyt2lV0txEJcbxrw3QpHrgArAKA+XIGZwEEa3xYI8BmQj68uFbIgKtGdLeQtwEfw4qDEMXptN11mg0Nw+GL0fix9ZiaAR0VFiYmCnQUOxzUiWu+yaU14YCS8Hr2A/iW3Jpb7RHJ8qdo8Dswq61SyFOng8UrfZqj+iGt3CjsbWLcLyFsSBxr2B2vkwrVUm32ja/o0R4dwkXZV/+CPWoXTMJACzx8fkuS8mJlelXC6Nrs+JDrsV5ptuSlk9vysPvLwPuupTULYDSkkUkdNTYF+bG1lMSlzUEza/WpHrUQLa3HqPHHovIaBg9GrD/SWUQ6Mv4l2s3bZ0IKDQ8gREabJCml5kbYKDY/uihtS0BRlVM1gO4xQ692OnrPpSeX38= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:18 np0005603742.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:18 np0005603742.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:19 np0005603742.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769849838.401209-207-255502037550552/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=8e0067b921874ac78e54fdb0b791b60d_id_rsa follow=False checksum=e016dd1ed18d94d0a2094e63d1a3ccb450645f6c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:19 np0005603742.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:19 np0005603742.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769849839.3090532-240-221170904495133/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=8e0067b921874ac78e54fdb0b791b60d_id_rsa.pub follow=False checksum=aba621f2dd47b89cdf940e09863c74da62eacf99 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:21 np0005603742.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 31 08:57:22 np0005603742.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:57:23 np0005603742.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 08:57:25 np0005603742.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:25 np0005603742.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:25 np0005603742.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:26 np0005603742.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:26 np0005603742.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:26 np0005603742.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:27 np0005603742.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibdjqnxfkdoqeplfjijgggtuidpxaunh ; /usr/bin/python3'
Jan 31 08:57:27 np0005603742.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:28 np0005603742.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:28 np0005603742.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:28 np0005603742.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqghqgechrpkofudiuxysjjpzbfefroc ; /usr/bin/python3'
Jan 31 08:57:28 np0005603742.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:28 np0005603742.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:28 np0005603742.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:28 np0005603742.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadytyqlyvcnrajrcbrkojlxogonzhkj ; /usr/bin/python3'
Jan 31 08:57:28 np0005603742.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:29 np0005603742.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769849848.1944509-21-85433231247822/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:29 np0005603742.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:29 np0005603742.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:29 np0005603742.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:30 np0005603742.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:30 np0005603742.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:30 np0005603742.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:30 np0005603742.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:31 np0005603742.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:31 np0005603742.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:31 np0005603742.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:32 np0005603742.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:32 np0005603742.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:32 np0005603742.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:32 np0005603742.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:33 np0005603742.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:33 np0005603742.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:33 np0005603742.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:34 np0005603742.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:34 np0005603742.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:34 np0005603742.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:34 np0005603742.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:35 np0005603742.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:35 np0005603742.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:35 np0005603742.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:35 np0005603742.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:36 np0005603742.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:36 np0005603742.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 08:57:39 np0005603742.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsohnugvvyemeezxrqnkugvmymhmwfez ; /usr/bin/python3'
Jan 31 08:57:39 np0005603742.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:39 np0005603742.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 08:57:39 np0005603742.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 08:57:39 np0005603742.novalocal systemd[1]: Started Time & Date Service.
Jan 31 08:57:39 np0005603742.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 31 08:57:39 np0005603742.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:39 np0005603742.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndkctppkvwizvxasgxuktdxjgsopayim ; /usr/bin/python3'
Jan 31 08:57:39 np0005603742.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:40 np0005603742.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:40 np0005603742.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:40 np0005603742.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:40 np0005603742.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769849860.2351713-153-184618006780136/source _original_basename=tmpeckzt5cu follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:41 np0005603742.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:41 np0005603742.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769849860.9618793-183-83937114656585/source _original_basename=tmpkqg7l91e follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:42 np0005603742.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enwpwbcsmvbburxyzgeuhfcalwmrwrjp ; /usr/bin/python3'
Jan 31 08:57:42 np0005603742.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:42 np0005603742.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:42 np0005603742.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:42 np0005603742.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyamoitorgjbbnhfwvfzzxmjynoboyeh ; /usr/bin/python3'
Jan 31 08:57:42 np0005603742.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:42 np0005603742.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769849862.0184727-231-121666180088374/source _original_basename=tmpo1q93syu follow=False checksum=a0ec3a319fdc158f50412f3ecece5a9469130624 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:42 np0005603742.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:43 np0005603742.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:57:43 np0005603742.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:57:43 np0005603742.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owanecixdvdknlkrfdpkqwoxensuiymv ; /usr/bin/python3'
Jan 31 08:57:43 np0005603742.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:43 np0005603742.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:57:43 np0005603742.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:43 np0005603742.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbmpepnkwijvbumwonhadnyzquxyzmhe ; /usr/bin/python3'
Jan 31 08:57:43 np0005603742.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:44 np0005603742.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769849863.5495558-273-135805706594587/source _original_basename=tmp0t6jqqdi follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:57:44 np0005603742.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:44 np0005603742.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnbjjiwupdwgucfdykhajuoebqusjwgm ; /usr/bin/python3'
Jan 31 08:57:44 np0005603742.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:57:44 np0005603742.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-fc6d-59ff-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:57:44 np0005603742.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:45 np0005603742.novalocal python3[6895]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-fc6d-59ff-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 08:57:46 np0005603742.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:58:05 np0005603742.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpfriddvcxskgbhzsyilietfjnrskki ; /usr/bin/python3'
Jan 31 08:58:05 np0005603742.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:58:05 np0005603742.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:58:05 np0005603742.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:09 np0005603742.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 08:58:40 np0005603742.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 08:58:40 np0005603742.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.6828] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 08:58:40 np0005603742.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7006] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7055] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7059] device (eth1): carrier: link connected
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7061] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7068] policy: auto-activating connection 'Wired connection 1' (2a9b79e5-8400-3fb4-adea-7ac6975b5e74)
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7073] device (eth1): Activation: starting connection 'Wired connection 1' (2a9b79e5-8400-3fb4-adea-7ac6975b5e74)
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7074] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7077] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7081] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:58:40 np0005603742.novalocal NetworkManager[856]: <info>  [1769849920.7085] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:58:41 np0005603742.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-3fa5-2fa1-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:58:48 np0005603742.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidfyfzeoccninlqkesooequliewiytq ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 08:58:48 np0005603742.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:58:48 np0005603742.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:58:48 np0005603742.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:48 np0005603742.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eripqvthdguezqrbbolkfrzvzhycbmzb ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 08:58:48 np0005603742.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:58:48 np0005603742.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769849928.2506495-102-170938484535833/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2d5d0cd04bf7c35b2544dd4f8ff179ef411b618e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:58:48 np0005603742.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:49 np0005603742.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scldpojuxfrqzhnpbefzkmjnqdmzmrhz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 08:58:49 np0005603742.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:58:49 np0005603742.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6133] caught SIGTERM, shutting down normally.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Stopping Network Manager...
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6142] dhcp4 (eth0): canceled DHCP transaction
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6142] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6142] dhcp4 (eth0): state changed no lease
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6145] manager: NetworkManager state is now CONNECTING
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6273] dhcp4 (eth1): canceled DHCP transaction
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6273] dhcp4 (eth1): state changed no lease
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[856]: <info>  [1769849929.6351] exiting (success)
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Stopped Network Manager.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: NetworkManager.service: Consumed 1.317s CPU time, 10.0M memory peak.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Starting Network Manager...
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.6771] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:6687f1f7-7543-41e0-ab3e-fbff009a01ab)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.6774] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.6811] manager[0x55eff0703000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Starting Hostname Service...
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Started Hostname Service.
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7434] hostname: hostname: using hostnamed
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7436] hostname: static hostname changed from (none) to "np0005603742.novalocal"
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7440] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7443] manager[0x55eff0703000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7443] manager[0x55eff0703000]: rfkill: WWAN hardware radio set enabled
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7467] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7467] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7467] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7468] manager: Networking is enabled by state file
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7471] settings: Loaded settings plugin: keyfile (internal)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7474] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7496] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7504] dhcp: init: Using DHCP client 'internal'
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7506] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7511] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7517] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7523] device (lo): Activation: starting connection 'lo' (b49ad265-99ce-4f7a-8dd0-0523eb06b6be)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7527] device (eth0): carrier: link connected
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7530] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7534] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7534] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7539] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7544] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7549] device (eth1): carrier: link connected
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7552] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7557] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2a9b79e5-8400-3fb4-adea-7ac6975b5e74) (indicated)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7557] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7561] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7567] device (eth1): Activation: starting connection 'Wired connection 1' (2a9b79e5-8400-3fb4-adea-7ac6975b5e74)
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Started Network Manager.
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7572] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7575] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7578] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7579] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7581] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7584] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7586] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7589] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7591] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7596] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7598] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7603] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7605] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7626] dhcp4 (eth0): state changed new lease, address=38.102.83.89
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.7630] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 08:58:49 np0005603742.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 08:58:49 np0005603742.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8049] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8062] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8065] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8074] device (lo): Activation: successful, device activated.
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8106] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8110] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8116] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8121] device (eth0): Activation: successful, device activated.
Jan 31 08:58:49 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849929.8129] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 08:58:50 np0005603742.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-3fa5-2fa1-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:58:59 np0005603742.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 08:59:19 np0005603742.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 08:59:21 np0005603742.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 31 08:59:21 np0005603742.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4521] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 08:59:35 np0005603742.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 08:59:35 np0005603742.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4741] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4744] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4750] device (eth1): Activation: successful, device activated.
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4756] manager: startup complete
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4758] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <warn>  [1769849975.4762] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4769] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4946] dhcp4 (eth1): canceled DHCP transaction
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4947] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4947] dhcp4 (eth1): state changed no lease
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4965] policy: auto-activating connection 'ci-private-network' (93575fce-2dad-5737-84b4-529cbf1b9632)
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4971] device (eth1): Activation: starting connection 'ci-private-network' (93575fce-2dad-5737-84b4-529cbf1b9632)
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4975] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4983] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.4992] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.7118] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.7122] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:59:35 np0005603742.novalocal NetworkManager[7191]: <info>  [1769849975.7128] device (eth1): Activation: successful, device activated.
Jan 31 08:59:45 np0005603742.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcczdrpremrqelwmzjcqqsynuxrnlklo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 08:59:45 np0005603742.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:59:45 np0005603742.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:59:45 np0005603742.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:45 np0005603742.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yosgiqsajnpybivjeuqnyzwidibrjwib ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 08:59:45 np0005603742.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:59:45 np0005603742.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769849984.9329817-259-5402191618212/source _original_basename=tmpptnv9o4j follow=False checksum=4c70946538b046da75c4d9a974fe8513d0c7bc71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:59:45 np0005603742.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:45 np0005603742.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 09:00:27 np0005603742.novalocal sshd-session[7470]: Invalid user sol from 2.57.122.238 port 49260
Jan 31 09:00:27 np0005603742.novalocal sshd-session[7470]: Connection closed by invalid user sol 2.57.122.238 port 49260 [preauth]
Jan 31 09:00:45 np0005603742.novalocal sshd-session[4317]: Received disconnect from 38.102.83.114 port 56148:11: disconnected by user
Jan 31 09:00:45 np0005603742.novalocal sshd-session[4317]: Disconnected from user zuul 38.102.83.114 port 56148
Jan 31 09:00:45 np0005603742.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:00:45 np0005603742.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Jan 31 09:01:01 np0005603742.novalocal CROND[7473]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 09:01:01 np0005603742.novalocal run-parts[7476]: (/etc/cron.hourly) starting 0anacron
Jan 31 09:01:01 np0005603742.novalocal anacron[7484]: Anacron started on 2026-01-31
Jan 31 09:01:01 np0005603742.novalocal anacron[7484]: Will run job `cron.daily' in 11 min.
Jan 31 09:01:01 np0005603742.novalocal anacron[7484]: Will run job `cron.weekly' in 31 min.
Jan 31 09:01:01 np0005603742.novalocal anacron[7484]: Will run job `cron.monthly' in 51 min.
Jan 31 09:01:01 np0005603742.novalocal anacron[7484]: Jobs will be executed sequentially
Jan 31 09:01:01 np0005603742.novalocal run-parts[7486]: (/etc/cron.hourly) finished 0anacron
Jan 31 09:01:01 np0005603742.novalocal CROND[7472]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 09:02:21 np0005603742.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 31 09:02:21 np0005603742.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 09:02:21 np0005603742.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 09:03:53 np0005603742.novalocal sshd-session[7489]: Invalid user sol from 2.57.122.238 port 51078
Jan 31 09:03:53 np0005603742.novalocal sshd-session[7489]: Connection closed by invalid user sol 2.57.122.238 port 51078 [preauth]
Jan 31 09:07:27 np0005603742.novalocal sshd-session[7494]: Accepted publickey for zuul from 38.102.83.114 port 53988 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 09:07:27 np0005603742.novalocal systemd-logind[795]: New session 3 of user zuul.
Jan 31 09:07:27 np0005603742.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 09:07:27 np0005603742.novalocal sshd-session[7494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:07:27 np0005603742.novalocal sudo[7521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqamdbaapmxukdanuerjhmhwfoowhlcv ; /usr/bin/python3'
Jan 31 09:07:27 np0005603742.novalocal sudo[7521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:27 np0005603742.novalocal python3[7523]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-a506-beef-000000002185-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:27 np0005603742.novalocal sudo[7521]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:28 np0005603742.novalocal sudo[7549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrhlbgkxjgwusfnrhlyccntzcrjyaug ; /usr/bin/python3'
Jan 31 09:07:28 np0005603742.novalocal sudo[7549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:28 np0005603742.novalocal python3[7551]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:28 np0005603742.novalocal sudo[7549]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:28 np0005603742.novalocal sudo[7575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quxakxzdpewxbeqxgvcavmvmxakszdlu ; /usr/bin/python3'
Jan 31 09:07:28 np0005603742.novalocal sudo[7575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:28 np0005603742.novalocal python3[7577]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:28 np0005603742.novalocal sudo[7575]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:28 np0005603742.novalocal sudo[7602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdmtowkzphodcnfjdjodprzmdbnpzkbk ; /usr/bin/python3'
Jan 31 09:07:28 np0005603742.novalocal sudo[7602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:28 np0005603742.novalocal python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:28 np0005603742.novalocal sudo[7602]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:28 np0005603742.novalocal sudo[7628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maehbbjicvidjvpbsttennlspyftymnl ; /usr/bin/python3'
Jan 31 09:07:28 np0005603742.novalocal sudo[7628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:28 np0005603742.novalocal python3[7630]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:28 np0005603742.novalocal sudo[7628]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:29 np0005603742.novalocal sudo[7654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alloaauxavizxfygyouqocwdxhicfclm ; /usr/bin/python3'
Jan 31 09:07:29 np0005603742.novalocal sudo[7654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:29 np0005603742.novalocal python3[7656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:29 np0005603742.novalocal sudo[7654]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:29 np0005603742.novalocal sudo[7732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaajmunigqxsgeijxpqacwudnwumnpmz ; /usr/bin/python3'
Jan 31 09:07:29 np0005603742.novalocal sudo[7732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:29 np0005603742.novalocal python3[7734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:07:29 np0005603742.novalocal sudo[7732]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:30 np0005603742.novalocal sudo[7805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zokmjpbuoepodluzzxzudortcydzixht ; /usr/bin/python3'
Jan 31 09:07:30 np0005603742.novalocal sudo[7805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:30 np0005603742.novalocal python3[7807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769850449.7550306-513-9965610606353/source _original_basename=tmpkqey3juc follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:07:30 np0005603742.novalocal sudo[7805]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:31 np0005603742.novalocal sudo[7855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhusclpssjdgtfpctqvhwnkqpgvpqrnn ; /usr/bin/python3'
Jan 31 09:07:31 np0005603742.novalocal sudo[7855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:31 np0005603742.novalocal python3[7857]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:07:31 np0005603742.novalocal systemd[1]: Reloading.
Jan 31 09:07:31 np0005603742.novalocal systemd-rc-local-generator[7874]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:07:31 np0005603742.novalocal sudo[7855]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:32 np0005603742.novalocal sudo[7910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okyiohkzahtctmqbiypufssagptprlqj ; /usr/bin/python3'
Jan 31 09:07:32 np0005603742.novalocal sudo[7910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:33 np0005603742.novalocal python3[7912]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 09:07:33 np0005603742.novalocal sudo[7910]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:33 np0005603742.novalocal sudo[7936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwsvovpsfdchwaewpgpdkddyubptrfi ; /usr/bin/python3'
Jan 31 09:07:33 np0005603742.novalocal sudo[7936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:33 np0005603742.novalocal python3[7938]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:33 np0005603742.novalocal sudo[7936]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:33 np0005603742.novalocal sudo[7964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvnvgqkqwahbhwovprmxkukonpoaxoz ; /usr/bin/python3'
Jan 31 09:07:33 np0005603742.novalocal sudo[7964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:33 np0005603742.novalocal python3[7966]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:33 np0005603742.novalocal sudo[7964]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:33 np0005603742.novalocal sudo[7992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwapzfosgaaakidwtjruwvnfpyccapg ; /usr/bin/python3'
Jan 31 09:07:33 np0005603742.novalocal sudo[7992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:34 np0005603742.novalocal python3[7994]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:34 np0005603742.novalocal sudo[7992]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:34 np0005603742.novalocal sudo[8020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rniatyqazcysmcsnqnpoojxrftvxssoh ; /usr/bin/python3'
Jan 31 09:07:34 np0005603742.novalocal sudo[8020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:34 np0005603742.novalocal python3[8022]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:34 np0005603742.novalocal sudo[8020]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:35 np0005603742.novalocal python3[8049]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-a506-beef-00000000218c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:07:35 np0005603742.novalocal python3[8079]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 09:07:37 np0005603742.novalocal sshd-session[7497]: Connection closed by 38.102.83.114 port 53988
Jan 31 09:07:37 np0005603742.novalocal sshd-session[7494]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:07:37 np0005603742.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 09:07:37 np0005603742.novalocal systemd[1]: session-3.scope: Consumed 3.513s CPU time.
Jan 31 09:07:37 np0005603742.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Jan 31 09:07:37 np0005603742.novalocal systemd-logind[795]: Removed session 3.
Jan 31 09:07:39 np0005603742.novalocal sshd-session[8085]: Accepted publickey for zuul from 38.102.83.114 port 48192 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 09:07:39 np0005603742.novalocal systemd-logind[795]: New session 4 of user zuul.
Jan 31 09:07:39 np0005603742.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 09:07:39 np0005603742.novalocal sshd-session[8085]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:07:39 np0005603742.novalocal sudo[8112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqkgmvuwkyeivkdbzpnauijbfnjujkez ; /usr/bin/python3'
Jan 31 09:07:39 np0005603742.novalocal sudo[8112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:07:39 np0005603742.novalocal python3[8114]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 09:07:50 np0005603742.novalocal setsebool[8156]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 09:07:50 np0005603742.novalocal setsebool[8156]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 09:08:01 np0005603742.novalocal sshd-session[8168]: Connection closed by 168.138.202.218 port 58338
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:08:02 np0005603742.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:08:11 np0005603742.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:08:28 np0005603742.novalocal dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 09:08:28 np0005603742.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:08:28 np0005603742.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:08:28 np0005603742.novalocal systemd[1]: Reloading.
Jan 31 09:08:29 np0005603742.novalocal systemd-rc-local-generator[8923]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:08:29 np0005603742.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:08:30 np0005603742.novalocal sudo[8112]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:33 np0005603742.novalocal python3[13646]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-ff7a-21be-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:08:34 np0005603742.novalocal kernel: evm: overlay not supported
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 31 09:08:34 np0005603742.novalocal dbus-broker-launch[13989]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 09:08:34 np0005603742.novalocal dbus-broker-launch[13989]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 31 09:08:34 np0005603742.novalocal dbus-broker-lau[13989]: Ready
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: Created slice Slice /user.
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: podman-13973.scope: unit configures an IP firewall, but not running as root.
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: Started podman-13973.scope.
Jan 31 09:08:34 np0005603742.novalocal systemd[4308]: Started podman-pause-826e9a5c.scope.
Jan 31 09:08:35 np0005603742.novalocal sudo[14467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmyztumjebcujrfjqcwalvukvbwogpc ; /usr/bin/python3'
Jan 31 09:08:35 np0005603742.novalocal sudo[14467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:08:35 np0005603742.novalocal python3[14487]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.106:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.106:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:08:35 np0005603742.novalocal python3[14487]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 09:08:35 np0005603742.novalocal sudo[14467]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:36 np0005603742.novalocal sshd-session[8088]: Connection closed by 38.102.83.114 port 48192
Jan 31 09:08:36 np0005603742.novalocal sshd-session[8085]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:08:36 np0005603742.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 09:08:36 np0005603742.novalocal systemd[1]: session-4.scope: Consumed 40.757s CPU time.
Jan 31 09:08:36 np0005603742.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Jan 31 09:08:36 np0005603742.novalocal systemd-logind[795]: Removed session 4.
Jan 31 09:08:47 np0005603742.novalocal irqbalance[792]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 09:08:47 np0005603742.novalocal irqbalance[792]: IRQ 27 affinity is now unmanaged
Jan 31 09:08:54 np0005603742.novalocal sshd-session[24544]: Connection closed by 38.102.83.5 port 55322 [preauth]
Jan 31 09:08:54 np0005603742.novalocal sshd-session[24546]: Connection closed by 38.102.83.5 port 55316 [preauth]
Jan 31 09:08:54 np0005603742.novalocal sshd-session[24541]: Unable to negotiate with 38.102.83.5 port 55338: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 09:08:54 np0005603742.novalocal sshd-session[24547]: Unable to negotiate with 38.102.83.5 port 55336: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 09:08:54 np0005603742.novalocal sshd-session[24549]: Unable to negotiate with 38.102.83.5 port 55352: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 09:08:57 np0005603742.novalocal sshd-session[26262]: Accepted publickey for zuul from 38.102.83.114 port 35058 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 09:08:57 np0005603742.novalocal systemd-logind[795]: New session 5 of user zuul.
Jan 31 09:08:57 np0005603742.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 09:08:57 np0005603742.novalocal sshd-session[26262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:08:58 np0005603742.novalocal python3[26368]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOdL7Sf75mE6joaLI+061Eq1nWkbdVxZnQL1oXEYXftwT9vLb4jlQkfCAjJQ4k2TDnFB4d/AM5Eqwr1kyRe1pqU= zuul@np0005603741.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 09:08:58 np0005603742.novalocal sudo[26497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxosliktbkoojalewryqvfaacfsrjytw ; /usr/bin/python3'
Jan 31 09:08:58 np0005603742.novalocal sudo[26497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:08:58 np0005603742.novalocal python3[26506]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOdL7Sf75mE6joaLI+061Eq1nWkbdVxZnQL1oXEYXftwT9vLb4jlQkfCAjJQ4k2TDnFB4d/AM5Eqwr1kyRe1pqU= zuul@np0005603741.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 09:08:58 np0005603742.novalocal sudo[26497]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:59 np0005603742.novalocal sudo[26816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommvrfzlaauvpzwwmdjjcxwpyciqtpyn ; /usr/bin/python3'
Jan 31 09:08:59 np0005603742.novalocal sudo[26816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:08:59 np0005603742.novalocal python3[26825]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603742.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 09:08:59 np0005603742.novalocal useradd[26915]: new group: name=cloud-admin, GID=1002
Jan 31 09:08:59 np0005603742.novalocal useradd[26915]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 09:08:59 np0005603742.novalocal sudo[26816]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:59 np0005603742.novalocal sudo[27045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqakodcjmrpitetvnocvudkyloumbuhc ; /usr/bin/python3'
Jan 31 09:08:59 np0005603742.novalocal sudo[27045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:08:59 np0005603742.novalocal python3[27054]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOdL7Sf75mE6joaLI+061Eq1nWkbdVxZnQL1oXEYXftwT9vLb4jlQkfCAjJQ4k2TDnFB4d/AM5Eqwr1kyRe1pqU= zuul@np0005603741.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 09:08:59 np0005603742.novalocal sudo[27045]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:59 np0005603742.novalocal sudo[27302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkorizxonvnzjfejdduswsxlgkdnfpzv ; /usr/bin/python3'
Jan 31 09:08:59 np0005603742.novalocal sudo[27302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:09:00 np0005603742.novalocal python3[27311]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:09:00 np0005603742.novalocal sudo[27302]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:00 np0005603742.novalocal sudo[27566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvvfuainawtstbefokuhsewwvubbcktf ; /usr/bin/python3'
Jan 31 09:09:00 np0005603742.novalocal sudo[27566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:09:00 np0005603742.novalocal python3[27572]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769850539.7697682-135-173391219619371/source _original_basename=tmp936q_njh follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:09:00 np0005603742.novalocal sudo[27566]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:01 np0005603742.novalocal sudo[27939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdrbynbsehipapobrlcsfswopsonmfxt ; /usr/bin/python3'
Jan 31 09:09:01 np0005603742.novalocal sudo[27939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:09:01 np0005603742.novalocal python3[27950]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 09:09:01 np0005603742.novalocal systemd[1]: Starting Hostname Service...
Jan 31 09:09:01 np0005603742.novalocal systemd[1]: Started Hostname Service.
Jan 31 09:09:01 np0005603742.novalocal systemd-hostnamed[28067]: Changed pretty hostname to 'compute-0'
Jan 31 09:09:01 compute-0 systemd-hostnamed[28067]: Hostname set to <compute-0> (static)
Jan 31 09:09:01 compute-0 NetworkManager[7191]: <info>  [1769850541.4384] hostname: static hostname changed from "np0005603742.novalocal" to "compute-0"
Jan 31 09:09:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 09:09:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 09:09:01 compute-0 sudo[27939]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:01 compute-0 sshd-session[26312]: Connection closed by 38.102.83.114 port 35058
Jan 31 09:09:01 compute-0 sshd-session[26262]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:09:01 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 09:09:01 compute-0 systemd[1]: session-5.scope: Consumed 2.149s CPU time.
Jan 31 09:09:01 compute-0 systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Jan 31 09:09:01 compute-0 systemd-logind[795]: Removed session 5.
Jan 31 09:09:05 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:09:05 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:09:05 compute-0 systemd[1]: man-db-cache-update.service: Consumed 41.225s CPU time.
Jan 31 09:09:05 compute-0 systemd[1]: run-ra31a57e089c14dab91b00f94135874a0.service: Deactivated successfully.
Jan 31 09:09:11 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 09:09:31 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 09:11:21 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 09:11:21 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 09:11:21 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 09:11:21 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 09:12:01 compute-0 anacron[7484]: Job `cron.daily' started
Jan 31 09:12:01 compute-0 anacron[7484]: Job `cron.daily' terminated
Jan 31 09:13:54 compute-0 sshd-session[30007]: Accepted publickey for zuul from 38.102.83.5 port 47462 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 09:13:54 compute-0 systemd-logind[795]: New session 6 of user zuul.
Jan 31 09:13:54 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 31 09:13:54 compute-0 sshd-session[30007]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:13:54 compute-0 python3[30083]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:13:56 compute-0 sudo[30197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhbiqzsiqsordimlqewqwpvkrasoszd ; /usr/bin/python3'
Jan 31 09:13:56 compute-0 sudo[30197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:56 compute-0 python3[30199]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:56 compute-0 sudo[30197]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[30270]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfcybksasxosctbhiqusnqggpouuqufg ; /usr/bin/python3'
Jan 31 09:13:56 compute-0 sudo[30270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:56 compute-0 python3[30272]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:56 compute-0 sudo[30270]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[30296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saccgryokgwoawdwfafftzpasnnrdbdg ; /usr/bin/python3'
Jan 31 09:13:56 compute-0 sudo[30296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:56 compute-0 python3[30298]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:56 compute-0 sudo[30296]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[30369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrxanjevzzkjuxyffkqcclbyngxzxrz ; /usr/bin/python3'
Jan 31 09:13:56 compute-0 sudo[30369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:57 compute-0 python3[30371]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:57 compute-0 sudo[30369]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:57 compute-0 sudo[30395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddttgzcwchvipyoberujbqpuruknldts ; /usr/bin/python3'
Jan 31 09:13:57 compute-0 sudo[30395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:57 compute-0 python3[30397]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:57 compute-0 sudo[30395]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:57 compute-0 sudo[30468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgelaedroyralqtcmhxftngausutqkhk ; /usr/bin/python3'
Jan 31 09:13:57 compute-0 sudo[30468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:57 compute-0 python3[30470]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:57 compute-0 sudo[30468]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:57 compute-0 sudo[30494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvddqucbcvwjprkdvdkdvrbluloeciis ; /usr/bin/python3'
Jan 31 09:13:57 compute-0 sudo[30494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:57 compute-0 python3[30496]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:57 compute-0 sudo[30494]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:57 compute-0 sudo[30567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyaacdrlnmbkaaluulzaciljvhcbrvpl ; /usr/bin/python3'
Jan 31 09:13:57 compute-0 sudo[30567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:58 compute-0 python3[30569]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:58 compute-0 sudo[30567]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:58 compute-0 sudo[30593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvadfdzuzdqfkkzwzsvxqubhywthskgm ; /usr/bin/python3'
Jan 31 09:13:58 compute-0 sudo[30593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:58 compute-0 python3[30595]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:58 compute-0 sudo[30593]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:58 compute-0 sudo[30666]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vezedhwjxxrwwsrvssynmwhbouhgifro ; /usr/bin/python3'
Jan 31 09:13:58 compute-0 sudo[30666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:58 compute-0 python3[30668]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:58 compute-0 sudo[30666]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:58 compute-0 sudo[30692]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izsetprsruiromssshsfaylzfssozlhu ; /usr/bin/python3'
Jan 31 09:13:58 compute-0 sudo[30692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:58 compute-0 python3[30694]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:58 compute-0 sudo[30692]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:58 compute-0 sudo[30765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohksyednabqtpmzqicfbykyznxqneqyo ; /usr/bin/python3'
Jan 31 09:13:58 compute-0 sudo[30765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:59 compute-0 python3[30767]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:59 compute-0 sudo[30765]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 sudo[30791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgouyxjoefbfrpvunjqvmxmopkbgzwh ; /usr/bin/python3'
Jan 31 09:13:59 compute-0 sudo[30791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:59 compute-0 python3[30793]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 09:13:59 compute-0 sudo[30791]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 sudo[30864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsmbgnifpzmjrqqgjcwsqnykpfrhwffe ; /usr/bin/python3'
Jan 31 09:13:59 compute-0 sudo[30864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:13:59 compute-0 python3[30866]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769850835.9098465-33824-209130562255260/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:13:59 compute-0 sudo[30864]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:02 compute-0 sshd-session[30892]: Unable to negotiate with 192.168.122.11 port 44218: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 09:14:02 compute-0 sshd-session[30891]: Unable to negotiate with 192.168.122.11 port 44206: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 09:14:02 compute-0 sshd-session[30893]: Unable to negotiate with 192.168.122.11 port 44220: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 09:14:02 compute-0 sshd-session[30894]: Connection closed by 192.168.122.11 port 44188 [preauth]
Jan 31 09:14:02 compute-0 sshd-session[30896]: Connection closed by 192.168.122.11 port 44192 [preauth]
Jan 31 09:16:42 compute-0 python3[30925]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:21:41 compute-0 sshd-session[30010]: Received disconnect from 38.102.83.5 port 47462:11: disconnected by user
Jan 31 09:21:41 compute-0 sshd-session[30010]: Disconnected from user zuul 38.102.83.5 port 47462
Jan 31 09:21:41 compute-0 sshd-session[30007]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:21:41 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 09:21:41 compute-0 systemd[1]: session-6.scope: Consumed 4.261s CPU time.
Jan 31 09:21:41 compute-0 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Jan 31 09:21:41 compute-0 systemd-logind[795]: Removed session 6.
Jan 31 09:30:39 compute-0 sshd-session[30931]: Accepted publickey for zuul from 192.168.122.30 port 54020 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:30:39 compute-0 systemd-logind[795]: New session 7 of user zuul.
Jan 31 09:30:39 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 31 09:30:39 compute-0 sshd-session[30931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:30:40 compute-0 python3.9[31084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:30:41 compute-0 sudo[31263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxittnvsftrjhpjjafiujihjpfyizhsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851841.204766-27-89774579273564/AnsiballZ_command.py'
Jan 31 09:30:41 compute-0 sudo[31263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:30:41 compute-0 python3.9[31265]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:30:51 compute-0 sudo[31263]: pam_unix(sudo:session): session closed for user root
Jan 31 09:30:51 compute-0 sshd-session[30934]: Connection closed by 192.168.122.30 port 54020
Jan 31 09:30:52 compute-0 sshd-session[30931]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:30:52 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 09:30:52 compute-0 systemd[1]: session-7.scope: Consumed 8.135s CPU time.
Jan 31 09:30:52 compute-0 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Jan 31 09:30:52 compute-0 systemd-logind[795]: Removed session 7.
Jan 31 09:30:58 compute-0 sshd-session[31324]: Accepted publickey for zuul from 192.168.122.30 port 54252 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:30:58 compute-0 systemd-logind[795]: New session 8 of user zuul.
Jan 31 09:30:58 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 31 09:30:58 compute-0 sshd-session[31324]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:30:59 compute-0 python3.9[31477]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:30:59 compute-0 sshd-session[31327]: Connection closed by 192.168.122.30 port 54252
Jan 31 09:31:00 compute-0 sshd-session[31324]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:31:00 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 09:31:00 compute-0 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Jan 31 09:31:00 compute-0 systemd-logind[795]: Removed session 8.
Jan 31 09:31:16 compute-0 sshd-session[31505]: Accepted publickey for zuul from 192.168.122.30 port 54602 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:31:16 compute-0 systemd-logind[795]: New session 9 of user zuul.
Jan 31 09:31:16 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 31 09:31:16 compute-0 sshd-session[31505]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:31:16 compute-0 python3.9[31658]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 09:31:17 compute-0 python3.9[31832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:31:18 compute-0 sudo[31982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwiodfltokonyoeiumdctbxszysyhxul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851878.1931956-40-260280380191095/AnsiballZ_command.py'
Jan 31 09:31:18 compute-0 sudo[31982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:18 compute-0 python3.9[31984]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:31:18 compute-0 sudo[31982]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:19 compute-0 sudo[32135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxmmnequathbxynbrkexagnnltfwhrpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851879.0536015-52-108617502901246/AnsiballZ_stat.py'
Jan 31 09:31:19 compute-0 sudo[32135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:19 compute-0 python3.9[32137]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:31:19 compute-0 sudo[32135]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:20 compute-0 sudo[32287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guiijachfgpwkzgakyuucjwgpqlzvcnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851879.848915-60-55563242401783/AnsiballZ_file.py'
Jan 31 09:31:20 compute-0 sudo[32287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:20 compute-0 python3.9[32289]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:31:20 compute-0 sudo[32287]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:20 compute-0 sudo[32439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtmqzoyqkwaqxoasswhwfeeeotxgruj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851880.55697-68-55911040872916/AnsiballZ_stat.py'
Jan 31 09:31:20 compute-0 sudo[32439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:20 compute-0 python3.9[32441]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:31:21 compute-0 sudo[32439]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:21 compute-0 sudo[32562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnjrpjffphgfkfftvmaayqjnyzlames ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851880.55697-68-55911040872916/AnsiballZ_copy.py'
Jan 31 09:31:21 compute-0 sudo[32562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:21 compute-0 python3.9[32564]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769851880.55697-68-55911040872916/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:31:21 compute-0 sudo[32562]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:21 compute-0 sudo[32714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnvbnhyhxhewenejyjwjceypyilipqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851881.7748344-83-244958857569214/AnsiballZ_setup.py'
Jan 31 09:31:21 compute-0 sudo[32714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:22 compute-0 python3.9[32716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:31:22 compute-0 sudo[32714]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:22 compute-0 sudo[32870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lafgmmnhnuucdpztvyujbxidcttqejpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851882.5888042-91-252763744886566/AnsiballZ_file.py'
Jan 31 09:31:22 compute-0 sudo[32870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:22 compute-0 python3.9[32872]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:31:23 compute-0 sudo[32870]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:23 compute-0 sudo[33022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgsgylllzhvqeejfulyjuiuelthixhxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851883.160245-100-74689444352978/AnsiballZ_file.py'
Jan 31 09:31:23 compute-0 sudo[33022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:23 compute-0 python3.9[33024]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:31:23 compute-0 sudo[33022]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:24 compute-0 python3.9[33174]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:31:29 compute-0 python3.9[33427]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:31:30 compute-0 python3.9[33577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:31:31 compute-0 python3.9[33731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:31:31 compute-0 sudo[33887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebopbmhxvrwrfmdfjqqqzrkeowsxdpaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851891.632317-148-175400721684741/AnsiballZ_setup.py'
Jan 31 09:31:31 compute-0 sudo[33887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:32 compute-0 python3.9[33889]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:31:32 compute-0 sudo[33887]: pam_unix(sudo:session): session closed for user root
Jan 31 09:31:32 compute-0 sudo[33971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbpmqxvfgzjpetnheugwpvynyflgsuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769851891.632317-148-175400721684741/AnsiballZ_dnf.py'
Jan 31 09:31:32 compute-0 sudo[33971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:31:33 compute-0 python3.9[33973]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:32:01 compute-0 anacron[7484]: Job `cron.weekly' started
Jan 31 09:32:01 compute-0 anacron[7484]: Job `cron.weekly' terminated
Jan 31 09:32:25 compute-0 systemd[1]: Reloading.
Jan 31 09:32:25 compute-0 systemd-rc-local-generator[34170]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:32:25 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 09:32:27 compute-0 systemd[1]: Reloading.
Jan 31 09:32:27 compute-0 systemd-rc-local-generator[34215]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:32:27 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 09:32:27 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 09:32:27 compute-0 systemd[1]: Reloading.
Jan 31 09:32:27 compute-0 systemd-rc-local-generator[34251]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:32:28 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 09:32:28 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 09:32:28 compute-0 dnf[34268]: Failed determining last makecache time.
Jan 31 09:32:28 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:32:28 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-barbican-42b4c41831408a8e323 158 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-glean-642fffe0203a8ffcc2443db52 150 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-cinder-1c00d6490d88e436f26ef 152 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-stevedore-c4acc5639fd2329372142 163 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-cloudkitty-tests-tempest-783703 138 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-diskimage-builder-61b717cc45660834fe9a 151 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-nova-eaa65f0b85123a4ee343246 140 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-designate-tests-tempest-347fdbc 145 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-glance-1fd12c29b339f30fe823e 156 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 164 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-manila-d783d10e75495b73866db 158 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-neutron-95cadbd379667c8520c8 102 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-octavia-5975097dd4b021385178 164 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-watcher-c014f81a8647287f6dcc 164 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-tcib-78032d201b02cee27e8e644c61 158 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 124 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-swift-dc98a8463506ac520c469a  99 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-python-tempestconf-8515371b7cceebd4282  95 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: delorean-openstack-heat-ui-013accbfd179753bc3f0 105 kB/s | 3.0 kB     00:00
Jan 31 09:32:28 compute-0 dnf[34268]: CentOS Stream 9 - BaseOS                         48 kB/s | 6.1 kB     00:00
Jan 31 09:32:29 compute-0 dnf[34268]: CentOS Stream 9 - AppStream                      29 kB/s | 6.5 kB     00:00
Jan 31 09:32:29 compute-0 dnf[34268]: CentOS Stream 9 - CRB                            52 kB/s | 6.0 kB     00:00
Jan 31 09:32:29 compute-0 dnf[34268]: CentOS Stream 9 - Extras packages                50 kB/s | 7.3 kB     00:00
Jan 31 09:32:29 compute-0 dnf[34268]: dlrn-antelope-testing                            99 kB/s | 3.0 kB     00:00
Jan 31 09:32:29 compute-0 dnf[34268]: dlrn-antelope-build-deps                        166 kB/s | 3.0 kB     00:00
Jan 31 09:32:30 compute-0 dnf[34268]: centos9-rabbitmq                                2.5 kB/s | 3.0 kB     00:01
Jan 31 09:32:30 compute-0 dnf[34268]: centos9-storage                                 117 kB/s | 3.0 kB     00:00
Jan 31 09:32:30 compute-0 dnf[34268]: centos9-opstools                                122 kB/s | 3.0 kB     00:00
Jan 31 09:32:30 compute-0 dnf[34268]: NFV SIG OpenvSwitch                             118 kB/s | 3.0 kB     00:00
Jan 31 09:32:30 compute-0 dnf[34268]: repo-setup-centos-appstream                     174 kB/s | 4.4 kB     00:00
Jan 31 09:32:31 compute-0 dnf[34268]: repo-setup-centos-baseos                        157 kB/s | 3.9 kB     00:00
Jan 31 09:32:31 compute-0 dnf[34268]: repo-setup-centos-highavailability              157 kB/s | 3.9 kB     00:00
Jan 31 09:32:31 compute-0 dnf[34268]: repo-setup-centos-powertools                    215 kB/s | 4.3 kB     00:00
Jan 31 09:32:31 compute-0 dnf[34268]: Extra Packages for Enterprise Linux 9 - x86_64  234 kB/s |  31 kB     00:00
Jan 31 09:32:31 compute-0 dnf[34268]: Metadata cache created.
Jan 31 09:32:32 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 09:32:32 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 09:32:32 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.872s CPU time.
Jan 31 09:33:55 compute-0 kernel: SELinux:  Converting 2728 SID table entries...
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:33:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:33:55 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 09:33:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:33:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:33:55 compute-0 systemd[1]: Reloading.
Jan 31 09:33:56 compute-0 systemd-rc-local-generator[34624]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:33:56 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:33:59 compute-0 sudo[33971]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:00 compute-0 sudo[35533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgzkdwuexxkbyeoozirxftvbwujcwmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852040.0479176-160-143557205973060/AnsiballZ_command.py'
Jan 31 09:34:00 compute-0 sudo[35533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:00 compute-0 python3.9[35535]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:34:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:34:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.014s CPU time.
Jan 31 09:34:02 compute-0 systemd[1]: run-r23c36032acb64fe2928ec3cf0c028b68.service: Deactivated successfully.
Jan 31 09:34:02 compute-0 sudo[35533]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:03 compute-0 sudo[35815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-posbbxgumpxtilpwzosgmpenoxnacdtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852043.0497746-168-25936431528100/AnsiballZ_selinux.py'
Jan 31 09:34:03 compute-0 sudo[35815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:04 compute-0 python3.9[35817]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 09:34:04 compute-0 sudo[35815]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:04 compute-0 sudo[35967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prigsxkeiziffvjvjkibnndgfpriuque ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852044.5834389-179-266628819275596/AnsiballZ_command.py'
Jan 31 09:34:04 compute-0 sudo[35967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:05 compute-0 python3.9[35969]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 09:34:05 compute-0 sudo[35967]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:06 compute-0 sudo[36122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-takkyczdbsgiwefcwulpvdovoxjnqafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852045.8190882-187-205987072262711/AnsiballZ_file.py'
Jan 31 09:34:06 compute-0 sudo[36122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:11 compute-0 python3.9[36124]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:34:11 compute-0 sudo[36122]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:12 compute-0 sudo[36274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svyjxjkrgubogxlpaokeqcaqeukzayki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852051.8017373-195-129309912184024/AnsiballZ_mount.py'
Jan 31 09:34:12 compute-0 sudo[36274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:17 compute-0 python3.9[36276]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 09:34:17 compute-0 sudo[36274]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:18 compute-0 sudo[36426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chvuscaxmbzmxuqjugoqmzkudlirdvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852058.534977-223-150138971494068/AnsiballZ_file.py'
Jan 31 09:34:18 compute-0 sudo[36426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:19 compute-0 python3.9[36428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:34:19 compute-0 sudo[36426]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:19 compute-0 sudo[36578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oulflkhrqdqpwdnxwqubqsywzwqidaxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852059.1691434-231-3501037775153/AnsiballZ_stat.py'
Jan 31 09:34:19 compute-0 sudo[36578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:19 compute-0 python3.9[36580]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:34:19 compute-0 sudo[36578]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:19 compute-0 sudo[36701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ridmzxpsaigpmveeafqpyrznbkaduwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852059.1691434-231-3501037775153/AnsiballZ_copy.py'
Jan 31 09:34:19 compute-0 sudo[36701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:20 compute-0 python3.9[36703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852059.1691434-231-3501037775153/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:34:20 compute-0 sudo[36701]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:20 compute-0 sudo[36853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhwhhgfbhqllthviczusdoshbtqqbsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852060.5909734-255-146080029043057/AnsiballZ_stat.py'
Jan 31 09:34:20 compute-0 sudo[36853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:21 compute-0 python3.9[36855]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:34:21 compute-0 sudo[36853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:21 compute-0 sudo[37005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckauqbuejtijcuekqzktoaxspclsnzzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852061.1750271-263-172843195571031/AnsiballZ_command.py'
Jan 31 09:34:21 compute-0 sudo[37005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:21 compute-0 python3.9[37007]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:21 compute-0 sudo[37005]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:22 compute-0 sudo[37158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etmybufiuawqtrijpgarmraafnigcipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852061.808531-271-160701063848566/AnsiballZ_file.py'
Jan 31 09:34:22 compute-0 sudo[37158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:22 compute-0 python3.9[37160]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:34:22 compute-0 sudo[37158]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:23 compute-0 sudo[37310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcqahoplekxpdoeapilxlgyouqlsnfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852062.7322385-282-216348975233956/AnsiballZ_getent.py'
Jan 31 09:34:23 compute-0 sudo[37310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:23 compute-0 python3.9[37312]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 09:34:23 compute-0 sudo[37310]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:23 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:34:23 compute-0 sudo[37464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxqvpmwfovdrdqukpgfjfcuavarqdhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852063.5271268-290-1910051792734/AnsiballZ_group.py'
Jan 31 09:34:23 compute-0 sudo[37464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:24 compute-0 python3.9[37466]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:34:24 compute-0 groupadd[37467]: group added to /etc/group: name=qemu, GID=107
Jan 31 09:34:24 compute-0 groupadd[37467]: group added to /etc/gshadow: name=qemu
Jan 31 09:34:24 compute-0 groupadd[37467]: new group: name=qemu, GID=107
Jan 31 09:34:24 compute-0 sudo[37464]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:25 compute-0 sudo[37622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgprsoajvzpmnddzsaadwrdfgtojcyqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852064.9334986-298-181653562556086/AnsiballZ_user.py'
Jan 31 09:34:25 compute-0 sudo[37622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:25 compute-0 python3.9[37624]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 09:34:25 compute-0 useradd[37626]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 09:34:25 compute-0 sudo[37622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:26 compute-0 sudo[37782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpyjcbvttdcsrdbvjbidnxxaldkcwltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852066.071513-306-64337988167653/AnsiballZ_getent.py'
Jan 31 09:34:26 compute-0 sudo[37782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:26 compute-0 python3.9[37784]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 09:34:26 compute-0 sudo[37782]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:27 compute-0 sudo[37935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqxdupaumiyawwigsoztochpnitoqpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852066.8264575-314-56445278438128/AnsiballZ_group.py'
Jan 31 09:34:27 compute-0 sudo[37935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:27 compute-0 python3.9[37937]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:34:27 compute-0 groupadd[37938]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 09:34:27 compute-0 groupadd[37938]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 09:34:27 compute-0 groupadd[37938]: new group: name=hugetlbfs, GID=42477
Jan 31 09:34:27 compute-0 sudo[37935]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:27 compute-0 sudo[38093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebumchvwmtbsdftviznlgvnmflddnsqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852067.649697-323-190849221361208/AnsiballZ_file.py'
Jan 31 09:34:27 compute-0 sudo[38093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:28 compute-0 python3.9[38095]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 09:34:28 compute-0 sudo[38093]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:28 compute-0 sudo[38245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghoilrgsutwaxiszxfavxykptctpxyzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852068.3714879-334-218980201064741/AnsiballZ_dnf.py'
Jan 31 09:34:28 compute-0 sudo[38245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:28 compute-0 python3.9[38247]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:34:33 compute-0 sudo[38245]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:33 compute-0 sudo[38399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcfhroutrzuezwnestsygpexjvqlltno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852073.315244-342-242047921718351/AnsiballZ_file.py'
Jan 31 09:34:33 compute-0 sudo[38399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:33 compute-0 python3.9[38401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:34:33 compute-0 sudo[38399]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:34 compute-0 sudo[38551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfndhjcjvysrxkizmqboqgcgzlbrrgpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852073.9659362-350-219814928283920/AnsiballZ_stat.py'
Jan 31 09:34:34 compute-0 sudo[38551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:34 compute-0 python3.9[38553]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:34:34 compute-0 sudo[38551]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:34 compute-0 sudo[38674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcfwttciiwmlvuquejwhnikuekvrjpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852073.9659362-350-219814928283920/AnsiballZ_copy.py'
Jan 31 09:34:34 compute-0 sudo[38674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:34 compute-0 python3.9[38676]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852073.9659362-350-219814928283920/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:34:34 compute-0 sudo[38674]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:35 compute-0 sudo[38826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkigkbvqdzdiivyndilhnjfygimuaach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852075.1044052-365-221953884715403/AnsiballZ_systemd.py'
Jan 31 09:34:35 compute-0 sudo[38826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:36 compute-0 python3.9[38828]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:34:36 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 09:34:36 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 09:34:36 compute-0 kernel: Bridge firewalling registered
Jan 31 09:34:36 compute-0 systemd-modules-load[38832]: Inserted module 'br_netfilter'
Jan 31 09:34:36 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 09:34:36 compute-0 sudo[38826]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:36 compute-0 sudo[38985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkskipdmeetgszdlypwrxeyqfrkjabyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852076.4281206-373-271958937407827/AnsiballZ_stat.py'
Jan 31 09:34:36 compute-0 sudo[38985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:36 compute-0 python3.9[38987]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:34:36 compute-0 sudo[38985]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:37 compute-0 sudo[39108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeibzfmtmlgpdmwutvchtorbekknfrwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852076.4281206-373-271958937407827/AnsiballZ_copy.py'
Jan 31 09:34:37 compute-0 sudo[39108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:37 compute-0 python3.9[39110]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852076.4281206-373-271958937407827/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:34:37 compute-0 sudo[39108]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:37 compute-0 sudo[39260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnneoqqhkcuzowkfvsjipnobfzrvstwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852077.662614-391-179778994045024/AnsiballZ_dnf.py'
Jan 31 09:34:37 compute-0 sudo[39260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:38 compute-0 python3.9[39262]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:34:43 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:34:43 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:34:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:34:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:34:43 compute-0 systemd[1]: Reloading.
Jan 31 09:34:44 compute-0 systemd-rc-local-generator[39325]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:34:44 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:34:45 compute-0 sudo[39260]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:46 compute-0 python3.9[41052]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:34:46 compute-0 python3.9[42144]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 09:34:47 compute-0 python3.9[43037]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:34:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:34:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:34:47 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.635s CPU time.
Jan 31 09:34:47 compute-0 systemd[1]: run-r5074e7ad4d674e2997a2dfdf9d06d1fc.service: Deactivated successfully.
Jan 31 09:34:47 compute-0 sudo[43484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxbjzmlnskvlhrpepbqrdjhtbzxjoeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852087.7456307-430-137336479230433/AnsiballZ_command.py'
Jan 31 09:34:47 compute-0 sudo[43484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:48 compute-0 python3.9[43486]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 09:34:48 compute-0 systemd[1]: Starting Authorization Manager...
Jan 31 09:34:48 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 09:34:48 compute-0 polkitd[43703]: Started polkitd version 0.117
Jan 31 09:34:48 compute-0 polkitd[43703]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 09:34:48 compute-0 polkitd[43703]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 09:34:48 compute-0 polkitd[43703]: Finished loading, compiling and executing 2 rules
Jan 31 09:34:48 compute-0 systemd[1]: Started Authorization Manager.
Jan 31 09:34:48 compute-0 polkitd[43703]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 09:34:48 compute-0 sudo[43484]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:49 compute-0 sudo[43871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppdmerajvvneiatgzuhdphrrmtcrzfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852089.0401042-439-56034812998542/AnsiballZ_systemd.py'
Jan 31 09:34:49 compute-0 sudo[43871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:49 compute-0 python3.9[43873]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:34:49 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 09:34:49 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 09:34:49 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 09:34:49 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 09:34:49 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 09:34:49 compute-0 sudo[43871]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:50 compute-0 python3.9[44034]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 09:34:52 compute-0 sudo[44184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykamaunzcmeppemndjkwhtumfmufjrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852092.0133517-496-186864483934849/AnsiballZ_systemd.py'
Jan 31 09:34:52 compute-0 sudo[44184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:52 compute-0 python3.9[44186]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:34:52 compute-0 systemd[1]: Reloading.
Jan 31 09:34:52 compute-0 systemd-rc-local-generator[44211]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:34:52 compute-0 sudo[44184]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:53 compute-0 sudo[44373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixvqmguhmlrfjhotwqnhklkajbzngpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852092.8935685-496-259714064129803/AnsiballZ_systemd.py'
Jan 31 09:34:53 compute-0 sudo[44373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:53 compute-0 python3.9[44375]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:34:53 compute-0 systemd[1]: Reloading.
Jan 31 09:34:53 compute-0 systemd-rc-local-generator[44404]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:34:53 compute-0 sudo[44373]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:54 compute-0 sudo[44562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmnjpmupujkenqbguiacfrotajvyzqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852093.929183-512-255234897052198/AnsiballZ_command.py'
Jan 31 09:34:54 compute-0 sudo[44562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:54 compute-0 python3.9[44564]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:54 compute-0 sudo[44562]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:54 compute-0 sudo[44715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yukoxhuclmrnkmkjigtubdeebyfxmdoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852094.5421834-520-27635741094689/AnsiballZ_command.py'
Jan 31 09:34:54 compute-0 sudo[44715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:54 compute-0 python3.9[44717]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:54 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 09:34:55 compute-0 sudo[44715]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:55 compute-0 sudo[44868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sboyuvnnpdrliohygjaxcpdfphnakdhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852095.1674125-528-66443128241000/AnsiballZ_command.py'
Jan 31 09:34:55 compute-0 sudo[44868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:55 compute-0 python3.9[44870]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:57 compute-0 sudo[44868]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:57 compute-0 sudo[45030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybywyccdlmnpsuwayjpfjggcbwomtlow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852097.2367082-536-42178471058489/AnsiballZ_command.py'
Jan 31 09:34:57 compute-0 sudo[45030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:57 compute-0 python3.9[45032]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:34:57 compute-0 sudo[45030]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:58 compute-0 sudo[45183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmvqvhtddhfnzfodcihkbdwbbkbvzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852098.1344802-544-195529327275104/AnsiballZ_systemd.py'
Jan 31 09:34:58 compute-0 sudo[45183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:34:58 compute-0 python3.9[45185]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:34:58 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 09:34:58 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 09:34:58 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 09:34:58 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 31 09:34:58 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 09:34:58 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 31 09:34:58 compute-0 sudo[45183]: pam_unix(sudo:session): session closed for user root
Jan 31 09:34:59 compute-0 sshd-session[31508]: Connection closed by 192.168.122.30 port 54602
Jan 31 09:34:59 compute-0 sshd-session[31505]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:34:59 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 09:34:59 compute-0 systemd[1]: session-9.scope: Consumed 2min 4.843s CPU time.
Jan 31 09:34:59 compute-0 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Jan 31 09:34:59 compute-0 systemd-logind[795]: Removed session 9.
Jan 31 09:35:05 compute-0 sshd-session[45216]: Accepted publickey for zuul from 192.168.122.30 port 40706 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:35:05 compute-0 systemd-logind[795]: New session 10 of user zuul.
Jan 31 09:35:05 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 31 09:35:05 compute-0 sshd-session[45216]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:35:06 compute-0 python3.9[45369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:35:07 compute-0 python3.9[45523]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:35:08 compute-0 sudo[45677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfboydpxwpbvczdhxlucmruaocldkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852108.5495443-45-273407265604797/AnsiballZ_command.py'
Jan 31 09:35:08 compute-0 sudo[45677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:09 compute-0 python3.9[45679]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:35:09 compute-0 sudo[45677]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:09 compute-0 python3.9[45830]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:35:10 compute-0 sudo[45984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlrxoujqxouxhzswsaezdmywsfxzybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852110.3260942-65-146447821483649/AnsiballZ_setup.py'
Jan 31 09:35:10 compute-0 sudo[45984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:10 compute-0 python3.9[45986]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:35:11 compute-0 sudo[45984]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:11 compute-0 sudo[46068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqkhbjkzvjzlrlegvyhzivwywyqvhui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852110.3260942-65-146447821483649/AnsiballZ_dnf.py'
Jan 31 09:35:11 compute-0 sudo[46068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:11 compute-0 python3.9[46070]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:35:12 compute-0 sudo[46068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:13 compute-0 sudo[46221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slbidqlihlkosvzdsuhwyixzxchjjsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852113.0248852-77-214152849771449/AnsiballZ_setup.py'
Jan 31 09:35:13 compute-0 sudo[46221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:13 compute-0 python3.9[46223]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:35:13 compute-0 sudo[46221]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:14 compute-0 sudo[46392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrkqdvsynxpizcgnggkdfoirgkglveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852113.8901877-88-37886591285591/AnsiballZ_file.py'
Jan 31 09:35:14 compute-0 sudo[46392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:14 compute-0 python3.9[46394]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:35:14 compute-0 sudo[46392]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:15 compute-0 sudo[46544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cztqfrxbblncqsvyfaumgmqkanofiwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852114.7951884-96-281298833331242/AnsiballZ_command.py'
Jan 31 09:35:15 compute-0 sudo[46544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:15 compute-0 python3.9[46546]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:35:15 compute-0 podman[46547]: 2026-01-31 09:35:15.334716316 +0000 UTC m=+0.073836872 system refresh
Jan 31 09:35:15 compute-0 sudo[46544]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:15 compute-0 sudo[46705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmyypelrqihkluphkyvxvfmmasndrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852115.5245256-104-160302314448223/AnsiballZ_stat.py'
Jan 31 09:35:15 compute-0 sudo[46705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:16 compute-0 python3.9[46707]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:35:16 compute-0 sudo[46705]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:35:16 compute-0 sudo[46828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrkbkeevgamppwxsnyzseynsipdtjdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852115.5245256-104-160302314448223/AnsiballZ_copy.py'
Jan 31 09:35:16 compute-0 sudo[46828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:16 compute-0 python3.9[46830]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852115.5245256-104-160302314448223/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4d2cdfd0dd75fe77b1c7b92177ea63f169086168 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:35:16 compute-0 sudo[46828]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:17 compute-0 sudo[46980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcndqqjcajddlozfmhxmibbspjfvzafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852117.1300604-119-133055366275042/AnsiballZ_stat.py'
Jan 31 09:35:17 compute-0 sudo[46980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:17 compute-0 python3.9[46982]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:35:17 compute-0 sudo[46980]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:17 compute-0 sudo[47103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjeimurtyuoggztqbmdczrxtbputsjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852117.1300604-119-133055366275042/AnsiballZ_copy.py'
Jan 31 09:35:17 compute-0 sudo[47103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:18 compute-0 python3.9[47105]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852117.1300604-119-133055366275042/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b8d6c3ecdde7ab37ac3542f94ac2c2aa799ca863 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:35:18 compute-0 sudo[47103]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:18 compute-0 sudo[47255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmzepwarxezxkuqapfkxreuskloqlzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852118.240527-135-69515640817167/AnsiballZ_ini_file.py'
Jan 31 09:35:18 compute-0 sudo[47255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:18 compute-0 python3.9[47257]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:35:18 compute-0 sudo[47255]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:19 compute-0 sudo[47407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbodejahjmuskbjxsponklnxxdwpkole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852118.9751086-135-169326931734135/AnsiballZ_ini_file.py'
Jan 31 09:35:19 compute-0 sudo[47407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:19 compute-0 python3.9[47409]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:35:19 compute-0 sudo[47407]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:19 compute-0 sudo[47559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjinuaxftvgozxkaualvrcvqgdlmexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852119.5286589-135-24754037761666/AnsiballZ_ini_file.py'
Jan 31 09:35:19 compute-0 sudo[47559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:20 compute-0 python3.9[47561]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:35:20 compute-0 sudo[47559]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:20 compute-0 sudo[47711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srzievzulnssapvrszttgzxbfzcoqgau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852120.2700946-135-148984509202138/AnsiballZ_ini_file.py'
Jan 31 09:35:20 compute-0 sudo[47711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:20 compute-0 python3.9[47713]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:35:20 compute-0 sudo[47711]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:21 compute-0 python3.9[47863]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:35:22 compute-0 sudo[48015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqkgdubosftqmpouzaaesqyhsxktrvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852121.9143724-175-161351417221876/AnsiballZ_dnf.py'
Jan 31 09:35:22 compute-0 sudo[48015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:22 compute-0 python3.9[48017]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:23 compute-0 sudo[48015]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:24 compute-0 sudo[48168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quktivpdiqyctebowxeanowkcvbhomhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852124.0187802-183-4618640978088/AnsiballZ_dnf.py'
Jan 31 09:35:24 compute-0 sudo[48168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:24 compute-0 python3.9[48170]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:26 compute-0 sudo[48168]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:26 compute-0 sudo[48329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brijaolwhyjncwalwkiiokivjjbkpugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852126.26058-193-48472812842974/AnsiballZ_dnf.py'
Jan 31 09:35:26 compute-0 sudo[48329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:26 compute-0 python3.9[48331]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:28 compute-0 sudo[48329]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:28 compute-0 sudo[48482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvdvihtvvzxjjdrdjfuxsmpqezkeirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852128.2694418-202-224992083000824/AnsiballZ_dnf.py'
Jan 31 09:35:28 compute-0 sudo[48482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:28 compute-0 python3.9[48484]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:29 compute-0 sudo[48482]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:30 compute-0 sudo[48635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnnihnxgiijqisxdxrsfewmseoawfwkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852130.1863356-213-38185744635590/AnsiballZ_dnf.py'
Jan 31 09:35:30 compute-0 sudo[48635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:30 compute-0 python3.9[48637]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:32 compute-0 sudo[48635]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:32 compute-0 sudo[48791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbppqqsdhjvpbfeeednnbabphkqpzlvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852132.3365085-221-71888942237340/AnsiballZ_dnf.py'
Jan 31 09:35:32 compute-0 sudo[48791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:32 compute-0 python3.9[48793]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:35 compute-0 sudo[48791]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:35 compute-0 sudo[48960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfgvzxlirgdcqlxmiqikowrgqluvhmud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852135.5454323-230-261150143930912/AnsiballZ_dnf.py'
Jan 31 09:35:35 compute-0 sudo[48960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:35 compute-0 python3.9[48962]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:37 compute-0 sudo[48960]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:37 compute-0 sudo[49113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfrmhdqekidlyzemzrbkzonselkoyywk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852137.3753467-239-188530379558796/AnsiballZ_dnf.py'
Jan 31 09:35:37 compute-0 sudo[49113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:37 compute-0 python3.9[49115]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:45 compute-0 sudo[49113]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:46 compute-0 sudo[49449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynmbgyptmeccjbzgnusynuloiwepzkeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852146.1673968-248-180666215882697/AnsiballZ_dnf.py'
Jan 31 09:35:47 compute-0 sudo[49449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:47 compute-0 python3.9[49451]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:48 compute-0 sudo[49449]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:49 compute-0 sudo[49605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfwjjwdoojikwfbqvytlutwwfxfobuid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852148.917382-258-210534486562051/AnsiballZ_dnf.py'
Jan 31 09:35:49 compute-0 sudo[49605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:49 compute-0 python3.9[49607]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:35:51 compute-0 sudo[49605]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:51 compute-0 sudo[49762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bndvqvgqitutlrnewqwfimmmbqdzaskc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852151.5018797-269-223529896300435/AnsiballZ_file.py'
Jan 31 09:35:51 compute-0 sudo[49762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:51 compute-0 python3.9[49764]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:35:52 compute-0 sudo[49762]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:52 compute-0 sudo[49937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuvhuglhskakwyivdfwykgldwxrwtlvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852152.140859-277-3900905542724/AnsiballZ_stat.py'
Jan 31 09:35:52 compute-0 sudo[49937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:52 compute-0 python3.9[49939]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:35:52 compute-0 sudo[49937]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:52 compute-0 sudo[50060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnalejrvcsqyepmyepayywqngwzdvjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852152.140859-277-3900905542724/AnsiballZ_copy.py'
Jan 31 09:35:52 compute-0 sudo[50060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:53 compute-0 python3.9[50062]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769852152.140859-277-3900905542724/.source.json _original_basename=.i20k055d follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:35:53 compute-0 sudo[50060]: pam_unix(sudo:session): session closed for user root
Jan 31 09:35:53 compute-0 sudo[50212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpqenaadfjbzhifrivcrperiyaxknbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852153.4028258-295-12659827429579/AnsiballZ_podman_image.py'
Jan 31 09:35:53 compute-0 sudo[50212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:35:54 compute-0 python3.9[50214]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:35:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat436556688-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 09:36:02 compute-0 podman[50227]: 2026-01-31 09:36:02.020985617 +0000 UTC m=+7.886449653 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:02 compute-0 sudo[50212]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:02 compute-0 sudo[50524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojubjiiaybqtenysvlubezfbyuzrblsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852162.5219722-306-97905519203765/AnsiballZ_podman_image.py'
Jan 31 09:36:02 compute-0 sudo[50524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:03 compute-0 python3.9[50526]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:19 compute-0 podman[50538]: 2026-01-31 09:36:19.790635946 +0000 UTC m=+16.753997725 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:19 compute-0 sudo[50524]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:20 compute-0 sudo[50833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjwinncukuxmwatscxvrzvehqrabldsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852180.1956565-316-211999068098501/AnsiballZ_podman_image.py'
Jan 31 09:36:20 compute-0 sudo[50833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:20 compute-0 python3.9[50835]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:36:36 compute-0 podman[50847]: 2026-01-31 09:36:36.724683512 +0000 UTC m=+16.071258973 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 09:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:36 compute-0 sudo[50833]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:37 compute-0 sudo[51111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqyhckexozxtmsuecdjutvdmpqcgpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852197.2552063-327-255062514597826/AnsiballZ_podman_image.py'
Jan 31 09:36:37 compute-0 sudo[51111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:37 compute-0 python3.9[51113]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:52 compute-0 podman[51126]: 2026-01-31 09:36:52.845371866 +0000 UTC m=+15.093588886 image pull 29d56e26655b85dbb8adac3e1ab61f6d15a43ab7cc871b995898a25601dc084c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 31 09:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:53 compute-0 sudo[51111]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:53 compute-0 sudo[51467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhgkongqtyseszelbshlcnbquzegahoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852213.118024-327-15588980428898/AnsiballZ_podman_image.py'
Jan 31 09:36:53 compute-0 sudo[51467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:53 compute-0 python3.9[51469]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:36:54 compute-0 podman[51480]: 2026-01-31 09:36:54.67275323 +0000 UTC m=+1.067849547 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 31 09:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:54 compute-0 sudo[51467]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:55 compute-0 sudo[51751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyuxswfofldkncjxdcaeogpzdlfbwkwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852215.0602653-343-5614795693342/AnsiballZ_podman_image.py'
Jan 31 09:36:55 compute-0 sudo[51751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:55 compute-0 python3.9[51753]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:57 compute-0 podman[51765]: 2026-01-31 09:36:57.939834322 +0000 UTC m=+2.381293367 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 31 09:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:36:58 compute-0 sudo[51751]: pam_unix(sudo:session): session closed for user root
Jan 31 09:36:58 compute-0 sudo[52023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilhibmhiaxlefotdexgcdnbrntkbtlqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852218.2116725-343-265504569966520/AnsiballZ_podman_image.py'
Jan 31 09:36:58 compute-0 sudo[52023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:36:58 compute-0 python3.9[52025]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 31 09:37:10 compute-0 podman[52036]: 2026-01-31 09:37:10.822192283 +0000 UTC m=+12.150182997 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 31 09:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:37:11 compute-0 sudo[52023]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:11 compute-0 sshd-session[45219]: Connection closed by 192.168.122.30 port 40706
Jan 31 09:37:11 compute-0 sshd-session[45216]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:37:11 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 09:37:11 compute-0 systemd[1]: session-10.scope: Consumed 2min 5.778s CPU time.
Jan 31 09:37:11 compute-0 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Jan 31 09:37:11 compute-0 systemd-logind[795]: Removed session 10.
Jan 31 09:37:18 compute-0 sshd-session[52285]: Accepted publickey for zuul from 192.168.122.30 port 44286 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:37:18 compute-0 systemd-logind[795]: New session 11 of user zuul.
Jan 31 09:37:18 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 31 09:37:18 compute-0 sshd-session[52285]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:37:19 compute-0 python3.9[52438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:37:20 compute-0 sudo[52592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-horbpbrjbunjbcwvrbykesgcdoseapdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852240.159223-31-1464648974618/AnsiballZ_getent.py'
Jan 31 09:37:20 compute-0 sudo[52592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:20 compute-0 python3.9[52594]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 09:37:20 compute-0 sudo[52592]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:21 compute-0 sudo[52745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsfzyihtjfntgwjotzvbjismwverkpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852240.9552443-39-237374973644523/AnsiballZ_group.py'
Jan 31 09:37:21 compute-0 sudo[52745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:21 compute-0 python3.9[52747]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:37:21 compute-0 groupadd[52748]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 09:37:21 compute-0 groupadd[52748]: group added to /etc/gshadow: name=openvswitch
Jan 31 09:37:21 compute-0 groupadd[52748]: new group: name=openvswitch, GID=42476
Jan 31 09:37:21 compute-0 sudo[52745]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:22 compute-0 sudo[52903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoyhintuvfxlkuiuasmyjtfmtlryqtog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852241.787529-47-131755396341140/AnsiballZ_user.py'
Jan 31 09:37:22 compute-0 sudo[52903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:22 compute-0 python3.9[52905]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 09:37:22 compute-0 useradd[52907]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 09:37:22 compute-0 useradd[52907]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 09:37:22 compute-0 useradd[52907]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 09:37:23 compute-0 sudo[52903]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:24 compute-0 sudo[53063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxakrcohxvnvfuirxvgkqkdurqfcikdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852243.9343243-57-40863841396819/AnsiballZ_setup.py'
Jan 31 09:37:24 compute-0 sudo[53063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:24 compute-0 python3.9[53065]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:37:24 compute-0 sudo[53063]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:25 compute-0 sudo[53147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rydzmkgouqapgtyddchvesvaycvwpupy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852243.9343243-57-40863841396819/AnsiballZ_dnf.py'
Jan 31 09:37:25 compute-0 sudo[53147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:25 compute-0 python3.9[53149]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:37:26 compute-0 sudo[53147]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:27 compute-0 sudo[53310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljoyxjjqobnyuzcxucjmyjuauinnyiim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852247.0431414-71-104582441182722/AnsiballZ_dnf.py'
Jan 31 09:37:27 compute-0 sudo[53310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:27 compute-0 python3.9[53312]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:37:38 compute-0 kernel: SELinux:  Converting 2741 SID table entries...
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:37:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:37:38 compute-0 groupadd[53335]: group added to /etc/group: name=unbound, GID=994
Jan 31 09:37:38 compute-0 groupadd[53335]: group added to /etc/gshadow: name=unbound
Jan 31 09:37:38 compute-0 groupadd[53335]: new group: name=unbound, GID=994
Jan 31 09:37:38 compute-0 useradd[53342]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 09:37:38 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 09:37:38 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 09:37:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:37:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:37:39 compute-0 systemd[1]: Reloading.
Jan 31 09:37:39 compute-0 systemd-sysv-generator[53841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:37:39 compute-0 systemd-rc-local-generator[53838]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:37:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:37:40 compute-0 sudo[53310]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:37:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:37:40 compute-0 systemd[1]: run-r215c27d467d741b99027720304e6050a.service: Deactivated successfully.
Jan 31 09:37:41 compute-0 sudo[54408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syawodeelgytffhsldvyvlhblgooiwch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852260.6455328-79-166776678193521/AnsiballZ_systemd.py'
Jan 31 09:37:41 compute-0 sudo[54408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:41 compute-0 python3.9[54410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:37:41 compute-0 systemd[1]: Reloading.
Jan 31 09:37:41 compute-0 systemd-rc-local-generator[54436]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:37:41 compute-0 systemd-sysv-generator[54440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:37:41 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 09:37:41 compute-0 chown[54451]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 09:37:41 compute-0 ovs-ctl[54456]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 09:37:41 compute-0 ovs-ctl[54456]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 09:37:41 compute-0 ovs-ctl[54456]: Starting ovsdb-server [  OK  ]
Jan 31 09:37:41 compute-0 ovs-vsctl[54505]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 09:37:42 compute-0 ovs-vsctl[54524]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"089e34f1-a6ad-49ae-8ce3-e9f7773bc2da\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 09:37:42 compute-0 ovs-ctl[54456]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 09:37:42 compute-0 ovs-ctl[54456]: Enabling remote OVSDB managers [  OK  ]
Jan 31 09:37:42 compute-0 ovs-vsctl[54530]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 09:37:42 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 09:37:42 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 09:37:42 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 09:37:42 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 09:37:42 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 09:37:42 compute-0 ovs-ctl[54575]: Inserting openvswitch module [  OK  ]
Jan 31 09:37:42 compute-0 ovs-ctl[54544]: Starting ovs-vswitchd [  OK  ]
Jan 31 09:37:42 compute-0 ovs-ctl[54544]: Enabling remote OVSDB managers [  OK  ]
Jan 31 09:37:42 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 09:37:42 compute-0 ovs-vsctl[54593]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 09:37:42 compute-0 systemd[1]: Starting Open vSwitch...
Jan 31 09:37:42 compute-0 systemd[1]: Finished Open vSwitch.
Jan 31 09:37:42 compute-0 sudo[54408]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:43 compute-0 python3.9[54744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:37:43 compute-0 sudo[54894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspsrplxzpwcvfxvglucicaslulobkqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852263.4755316-97-172982007435387/AnsiballZ_sefcontext.py'
Jan 31 09:37:43 compute-0 sudo[54894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:44 compute-0 python3.9[54896]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 09:37:45 compute-0 kernel: SELinux:  Converting 2755 SID table entries...
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:37:45 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:37:45 compute-0 sudo[54894]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:46 compute-0 python3.9[55051]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:37:46 compute-0 sudo[55207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secklwlbwsujdbvefedvyczimusqnthu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852266.748592-115-242907591204092/AnsiballZ_dnf.py'
Jan 31 09:37:46 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 09:37:46 compute-0 sudo[55207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:47 compute-0 python3.9[55209]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:37:49 compute-0 sudo[55207]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:50 compute-0 sudo[55360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbdjwberwiukhfffnykablxlclyddhct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852269.709135-123-81079584343321/AnsiballZ_command.py'
Jan 31 09:37:50 compute-0 sudo[55360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:50 compute-0 python3.9[55362]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:37:50 compute-0 sudo[55360]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:51 compute-0 sudo[55647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krvxdigyfpzauwlrfchfuqqgvwmaqpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852271.216243-131-76324728977285/AnsiballZ_file.py'
Jan 31 09:37:51 compute-0 sudo[55647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:51 compute-0 python3.9[55649]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 09:37:51 compute-0 sudo[55647]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:52 compute-0 python3.9[55799]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:37:52 compute-0 sudo[55951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqianftzvgqjrihidhwipcashdouidgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852272.7468262-147-38641998609676/AnsiballZ_dnf.py'
Jan 31 09:37:52 compute-0 sudo[55951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:53 compute-0 python3.9[55953]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:37:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:37:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:37:54 compute-0 systemd[1]: Reloading.
Jan 31 09:37:54 compute-0 systemd-rc-local-generator[55988]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:37:54 compute-0 systemd-sysv-generator[55991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:37:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:37:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:37:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:37:55 compute-0 systemd[1]: run-r96160e5d78c94dbd9ff0b357f5e5730f.service: Deactivated successfully.
Jan 31 09:37:55 compute-0 sudo[55951]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:55 compute-0 sudo[56268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arnwdajdrizvbrigdclrtchghetvcwfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852275.4108124-155-70928365965913/AnsiballZ_systemd.py'
Jan 31 09:37:55 compute-0 sudo[56268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:56 compute-0 python3.9[56270]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:37:56 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 09:37:56 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 09:37:56 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 09:37:56 compute-0 systemd[1]: Stopping Network Manager...
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1503] caught SIGTERM, shutting down normally.
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1515] dhcp4 (eth0): canceled DHCP transaction
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1516] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1516] dhcp4 (eth0): state changed no lease
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1518] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 09:37:56 compute-0 NetworkManager[7191]: <info>  [1769852276.1579] exiting (success)
Jan 31 09:37:56 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 09:37:56 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 09:37:56 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 09:37:56 compute-0 systemd[1]: Stopped Network Manager.
Jan 31 09:37:56 compute-0 systemd[1]: NetworkManager.service: Consumed 19.514s CPU time, 4.3M memory peak, read 0B from disk, written 18.5K to disk.
Jan 31 09:37:56 compute-0 systemd[1]: Starting Network Manager...
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2100] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:6687f1f7-7543-41e0-ab3e-fbff009a01ab)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2102] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2151] manager[0x5632196c7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 09:37:56 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 09:37:56 compute-0 systemd[1]: Started Hostname Service.
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2720] hostname: hostname: using hostnamed
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2720] hostname: static hostname changed from (none) to "compute-0"
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2724] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2727] manager[0x5632196c7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2727] manager[0x5632196c7000]: rfkill: WWAN hardware radio set enabled
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2745] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2753] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2753] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2754] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2754] manager: Networking is enabled by state file
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2756] settings: Loaded settings plugin: keyfile (internal)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2758] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2788] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2796] dhcp: init: Using DHCP client 'internal'
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2799] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2804] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2808] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2814] device (lo): Activation: starting connection 'lo' (b49ad265-99ce-4f7a-8dd0-0523eb06b6be)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2819] device (eth0): carrier: link connected
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2822] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2826] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2826] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2831] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2836] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2840] device (eth1): carrier: link connected
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2843] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2846] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (93575fce-2dad-5737-84b4-529cbf1b9632) (indicated)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2847] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2850] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2856] device (eth1): Activation: starting connection 'ci-private-network' (93575fce-2dad-5737-84b4-529cbf1b9632)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2861] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 09:37:56 compute-0 systemd[1]: Started Network Manager.
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2868] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2870] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2871] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2873] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2875] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2877] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2878] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2880] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2886] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2888] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2909] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2918] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2932] dhcp4 (eth0): state changed new lease, address=38.102.83.89
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.2937] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3256] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3260] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3261] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3263] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3266] device (lo): Activation: successful, device activated.
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3270] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3273] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3275] device (eth1): Activation: successful, device activated.
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3282] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3283] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3285] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3287] device (eth0): Activation: successful, device activated.
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3291] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 09:37:56 compute-0 NetworkManager[56281]: <info>  [1769852276.3293] manager: startup complete
Jan 31 09:37:56 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 31 09:37:56 compute-0 sudo[56268]: pam_unix(sudo:session): session closed for user root
Jan 31 09:37:56 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 31 09:37:56 compute-0 sudo[56494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsbpdookojjkqgcwgbvzpvzwsflgozbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852276.509667-163-184805833899306/AnsiballZ_dnf.py'
Jan 31 09:37:56 compute-0 sudo[56494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:37:56 compute-0 python3.9[56496]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:38:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:38:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:38:01 compute-0 systemd[1]: Reloading.
Jan 31 09:38:01 compute-0 systemd-rc-local-generator[56547]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:38:01 compute-0 systemd-sysv-generator[56550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:38:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:38:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:38:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:38:01 compute-0 systemd[1]: run-r6be3f769187144ccb29c94f290c122cb.service: Deactivated successfully.
Jan 31 09:38:02 compute-0 sudo[56494]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:02 compute-0 sudo[56953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nudymdavnvqjvimuxjduzijjshpljcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852282.4869683-175-4594244758123/AnsiballZ_stat.py'
Jan 31 09:38:02 compute-0 sudo[56953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:02 compute-0 python3.9[56955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:38:02 compute-0 sudo[56953]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:03 compute-0 sudo[57105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdchebjtlulgbxxubdmptofvciiqasc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852283.092198-184-113968095416457/AnsiballZ_ini_file.py'
Jan 31 09:38:03 compute-0 sudo[57105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:03 compute-0 python3.9[57107]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:03 compute-0 sudo[57105]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:04 compute-0 sudo[57259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhnpodrtzzmcohlrbqswkrtyhgdsgfme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852283.8600533-194-27740121561908/AnsiballZ_ini_file.py'
Jan 31 09:38:04 compute-0 sudo[57259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:04 compute-0 python3.9[57261]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:04 compute-0 sudo[57259]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:04 compute-0 sudo[57411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphouizgqrqnbexrljzwirunjdesiulj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852284.3986902-194-107613463591819/AnsiballZ_ini_file.py'
Jan 31 09:38:04 compute-0 sudo[57411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:04 compute-0 python3.9[57413]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:04 compute-0 sudo[57411]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:05 compute-0 sudo[57563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwbblbigjxarqzsbvqcqwashaysqebl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852285.0088496-209-234273258736086/AnsiballZ_ini_file.py'
Jan 31 09:38:05 compute-0 sudo[57563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:05 compute-0 python3.9[57565]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:05 compute-0 sudo[57563]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:05 compute-0 sudo[57715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbwwkrsfexhotihvhxexmswacltbnqav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852285.497371-209-176074879377367/AnsiballZ_ini_file.py'
Jan 31 09:38:05 compute-0 sudo[57715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:05 compute-0 python3.9[57717]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:05 compute-0 sudo[57715]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:06 compute-0 sudo[57867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofyqmklcmvkhdmjkorojyzdlqmmsyaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852286.0174465-224-159288644676153/AnsiballZ_stat.py'
Jan 31 09:38:06 compute-0 sudo[57867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:06 compute-0 python3.9[57869]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:38:06 compute-0 sudo[57867]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:06 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 09:38:06 compute-0 sudo[57990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfgrhndpdziktkyuftwbgqfowoqdxkdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852286.0174465-224-159288644676153/AnsiballZ_copy.py'
Jan 31 09:38:06 compute-0 sudo[57990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:07 compute-0 python3.9[57992]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852286.0174465-224-159288644676153/.source _original_basename=.sml563np follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:07 compute-0 sudo[57990]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:07 compute-0 sudo[58142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekofhpdameplmkfqpvoumgrwcellcawl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852287.170575-239-173404411307334/AnsiballZ_file.py'
Jan 31 09:38:07 compute-0 sudo[58142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:07 compute-0 python3.9[58144]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:07 compute-0 sudo[58142]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:08 compute-0 sudo[58294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-touuqqknmnzsitjaffeppjywtdgoylsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852287.7658138-247-170042097892323/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 09:38:08 compute-0 sudo[58294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:08 compute-0 python3.9[58296]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 09:38:08 compute-0 sudo[58294]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:09 compute-0 sudo[58446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apeiyhdzoxmqgztyidabdivomwujprve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852288.609742-256-2054727008779/AnsiballZ_file.py'
Jan 31 09:38:09 compute-0 sudo[58446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:09 compute-0 python3.9[58448]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:09 compute-0 sudo[58446]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:10 compute-0 sudo[58598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blimwhrnidsxakitrzrhunmksmamveqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852289.8095844-266-116469733662129/AnsiballZ_stat.py'
Jan 31 09:38:10 compute-0 sudo[58598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:10 compute-0 sudo[58598]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:10 compute-0 sudo[58721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknnexuszxxomdbgcheyxtdebmaotztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852289.8095844-266-116469733662129/AnsiballZ_copy.py'
Jan 31 09:38:10 compute-0 sudo[58721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:10 compute-0 sudo[58721]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:11 compute-0 sudo[58873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dolkmrpuehzwqbxhisamrkalxtvmpigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852290.8438883-281-167603701416864/AnsiballZ_slurp.py'
Jan 31 09:38:11 compute-0 sudo[58873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:11 compute-0 python3.9[58875]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 09:38:11 compute-0 sudo[58873]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:12 compute-0 sudo[59048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnjqgmnuirbpmtduhmpuumodixkrrhlc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852291.6958878-290-270792987219370/async_wrapper.py j152646803053 300 /home/zuul/.ansible/tmp/ansible-tmp-1769852291.6958878-290-270792987219370/AnsiballZ_edpm_os_net_config.py _'
Jan 31 09:38:12 compute-0 sudo[59048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:12 compute-0 ansible-async_wrapper.py[59050]: Invoked with j152646803053 300 /home/zuul/.ansible/tmp/ansible-tmp-1769852291.6958878-290-270792987219370/AnsiballZ_edpm_os_net_config.py _
Jan 31 09:38:12 compute-0 ansible-async_wrapper.py[59053]: Starting module and watcher
Jan 31 09:38:12 compute-0 ansible-async_wrapper.py[59053]: Start watching 59054 (300)
Jan 31 09:38:12 compute-0 ansible-async_wrapper.py[59054]: Start module (59054)
Jan 31 09:38:12 compute-0 ansible-async_wrapper.py[59050]: Return async_wrapper task started.
Jan 31 09:38:12 compute-0 sudo[59048]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:12 compute-0 python3.9[59055]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 09:38:13 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 09:38:13 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 09:38:13 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 09:38:13 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 09:38:13 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.3684] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.3699] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4177] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4179] audit: op="connection-add" uuid="f01d1489-0702-4b7c-ba31-c722392d94f6" name="br-ex-br" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4194] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4195] audit: op="connection-add" uuid="6bd02b45-1dee-4150-bcaa-56b532ff67a7" name="br-ex-port" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4212] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4214] audit: op="connection-add" uuid="a8867e66-9521-4cd6-9eb7-cffca56504cb" name="eth1-port" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4226] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4228] audit: op="connection-add" uuid="e0cc0ac3-3b42-4300-b9f1-2750db34d835" name="vlan20-port" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4239] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4241] audit: op="connection-add" uuid="4fe02b22-5977-4aab-988e-b4fababbb9db" name="vlan21-port" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4256] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4258] audit: op="connection-add" uuid="c17d7f9b-9ed6-4a47-a83f-0faea04725ff" name="vlan22-port" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4279] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4298] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4301] audit: op="connection-add" uuid="f9ef5a22-d7bc-4e11-99ba-ee5efbfcc71f" name="br-ex-if" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4328] audit: op="connection-update" uuid="93575fce-2dad-5737-84b4-529cbf1b9632" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,ipv4.routing-rules,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.never-default,ipv4.dns,ipv6.routing-rules,ipv6.method,ipv6.routes,ipv6.addresses,ipv6.addr-gen-mode,ipv6.dns,connection.master,connection.slave-type,connection.timestamp,connection.controller,connection.port-type" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4350] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4353] audit: op="connection-add" uuid="37a95879-93d3-438c-a7dc-bb03fefb2a7f" name="vlan20-if" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4369] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4371] audit: op="connection-add" uuid="18657d06-4f8c-430b-a0c8-1908530fa2e4" name="vlan21-if" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4389] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4391] audit: op="connection-add" uuid="ae444b09-84d4-4345-ba3f-0e7831ddd9d2" name="vlan22-if" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4404] audit: op="connection-delete" uuid="2a9b79e5-8400-3fb4-adea-7ac6975b5e74" name="Wired connection 1" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4416] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4418] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4425] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4429] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f01d1489-0702-4b7c-ba31-c722392d94f6)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4429] audit: op="connection-activate" uuid="f01d1489-0702-4b7c-ba31-c722392d94f6" name="br-ex-br" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4431] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4432] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4437] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4441] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6bd02b45-1dee-4150-bcaa-56b532ff67a7)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4443] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4444] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4449] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4454] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a8867e66-9521-4cd6-9eb7-cffca56504cb)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4456] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4457] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4462] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4467] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e0cc0ac3-3b42-4300-b9f1-2750db34d835)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4468] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4469] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4474] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4479] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (4fe02b22-5977-4aab-988e-b4fababbb9db)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4481] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4482] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4487] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4492] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c17d7f9b-9ed6-4a47-a83f-0faea04725ff)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4493] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4496] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4498] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4505] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4506] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4510] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4516] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f9ef5a22-d7bc-4e11-99ba-ee5efbfcc71f)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4517] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4523] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4526] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4527] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4529] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4554] device (eth1): disconnecting for new activation request.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4555] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4559] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4561] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4562] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4565] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4567] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4570] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4574] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (37a95879-93d3-438c-a7dc-bb03fefb2a7f)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4575] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4578] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4580] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4581] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4584] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4585] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4588] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4594] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (18657d06-4f8c-430b-a0c8-1908530fa2e4)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4595] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4599] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4602] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4604] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4608] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <warn>  [1769852294.4609] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4613] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4619] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (ae444b09-84d4-4345-ba3f-0e7831ddd9d2)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4620] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4624] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4626] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4628] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4630] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4645] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4649] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4653] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4657] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4665] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4669] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4675] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4679] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4681] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4685] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4688] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4690] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4691] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4697] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4700] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 kernel: Timeout policy base is empty
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4702] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4703] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4706] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4710] dhcp4 (eth0): canceled DHCP transaction
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4710] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4710] dhcp4 (eth0): state changed no lease
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4711] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 09:38:14 compute-0 systemd-udevd[59060]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4720] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4722] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59056 uid=0 result="fail" reason="Device is not activated"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4728] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4763] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4771] device (eth1): disconnecting for new activation request.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4771] audit: op="connection-activate" uuid="93575fce-2dad-5737-84b4-529cbf1b9632" name="ci-private-network" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4773] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4775] dhcp4 (eth0): state changed new lease, address=38.102.83.89
Jan 31 09:38:14 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4840] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59056 uid=0 result="success"
Jan 31 09:38:14 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.4871] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 09:38:14 compute-0 kernel: br-ex: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5008] device (eth1): Activation: starting connection 'ci-private-network' (93575fce-2dad-5737-84b4-529cbf1b9632)
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5024] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5028] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5039] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5040] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5042] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5043] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5044] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5046] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5054] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5061] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5064] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5068] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5072] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5075] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5078] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5082] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5085] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5089] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5092] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5096] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5099] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 kernel: vlan22: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5104] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5108] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 systemd-udevd[59062]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5124] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5134] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 kernel: vlan20: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5182] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5184] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5187] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5193] device (eth1): Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5197] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5201] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 kernel: vlan21: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5219] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5236] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5278] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5290] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5304] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5305] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5307] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5311] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5320] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5324] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5328] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5334] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5375] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5376] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 09:38:14 compute-0 NetworkManager[56281]: <info>  [1769852294.5380] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 09:38:15 compute-0 NetworkManager[56281]: <info>  [1769852295.6537] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59056 uid=0 result="success"
Jan 31 09:38:15 compute-0 NetworkManager[56281]: <info>  [1769852295.8071] checkpoint[0x56321969c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 09:38:15 compute-0 NetworkManager[56281]: <info>  [1769852295.8074] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.0587] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.0595] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 sudo[59388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimmlqxxkmpwakbycrornmxsrjdbpacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852295.66237-290-175011029610422/AnsiballZ_async_status.py'
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.1929] audit: op="networking-control" arg="global-dns-configuration" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 sudo[59388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.1961] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.1989] audit: op="networking-control" arg="global-dns-configuration" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.2003] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.3094] checkpoint[0x56321969ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 09:38:16 compute-0 NetworkManager[56281]: <info>  [1769852296.3097] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59056 uid=0 result="success"
Jan 31 09:38:16 compute-0 ansible-async_wrapper.py[59054]: Module complete (59054)
Jan 31 09:38:16 compute-0 python3.9[59390]: ansible-ansible.legacy.async_status Invoked with jid=j152646803053.59050 mode=status _async_dir=/root/.ansible_async
Jan 31 09:38:16 compute-0 sudo[59388]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:16 compute-0 sudo[59492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onpydzlgagidlrkqdaqapdlnvkfkikqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852295.66237-290-175011029610422/AnsiballZ_async_status.py'
Jan 31 09:38:16 compute-0 sudo[59492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:16 compute-0 python3.9[59494]: ansible-ansible.legacy.async_status Invoked with jid=j152646803053.59050 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 09:38:16 compute-0 sudo[59492]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:17 compute-0 sudo[59644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuyiuabeibdxqaazhpxqhajlvsrldbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852297.0377073-312-31220474254158/AnsiballZ_stat.py'
Jan 31 09:38:17 compute-0 sudo[59644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:17 compute-0 python3.9[59646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:38:17 compute-0 sudo[59644]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:17 compute-0 ansible-async_wrapper.py[59053]: Done in kid B.
Jan 31 09:38:17 compute-0 sudo[59767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysiezsyeatuhtcascjvfrczgbpvvlmsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852297.0377073-312-31220474254158/AnsiballZ_copy.py'
Jan 31 09:38:17 compute-0 sudo[59767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:17 compute-0 python3.9[59769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852297.0377073-312-31220474254158/.source.returncode _original_basename=.076ahg8x follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:17 compute-0 sudo[59767]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:18 compute-0 sudo[59919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htaxjbbbqavsoyspxynhjrpfonvvliqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852298.1635587-328-258272468582311/AnsiballZ_stat.py'
Jan 31 09:38:18 compute-0 sudo[59919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:18 compute-0 python3.9[59921]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:38:18 compute-0 sudo[59919]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:18 compute-0 sudo[60042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkauqtdcggeaowsmxqyatbiehytvwacf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852298.1635587-328-258272468582311/AnsiballZ_copy.py'
Jan 31 09:38:18 compute-0 sudo[60042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:19 compute-0 python3.9[60044]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852298.1635587-328-258272468582311/.source.cfg _original_basename=.3fvlzz1i follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:19 compute-0 sudo[60042]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:19 compute-0 sudo[60194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cthzjisuwdkqirtusfxiezhibeotpidd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852299.219224-343-258423147541933/AnsiballZ_systemd.py'
Jan 31 09:38:19 compute-0 sudo[60194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:19 compute-0 python3.9[60196]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:38:19 compute-0 systemd[1]: Reloading Network Manager...
Jan 31 09:38:19 compute-0 NetworkManager[56281]: <info>  [1769852299.9064] audit: op="reload" arg="0" pid=60201 uid=0 result="success"
Jan 31 09:38:19 compute-0 NetworkManager[56281]: <info>  [1769852299.9070] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 09:38:19 compute-0 systemd[1]: Reloaded Network Manager.
Jan 31 09:38:19 compute-0 sudo[60194]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:20 compute-0 sshd-session[52288]: Connection closed by 192.168.122.30 port 44286
Jan 31 09:38:20 compute-0 sshd-session[52285]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:38:20 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 09:38:20 compute-0 systemd[1]: session-11.scope: Consumed 44.069s CPU time.
Jan 31 09:38:20 compute-0 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Jan 31 09:38:20 compute-0 systemd-logind[795]: Removed session 11.
Jan 31 09:38:25 compute-0 sshd-session[60231]: Accepted publickey for zuul from 192.168.122.30 port 57920 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:38:25 compute-0 systemd-logind[795]: New session 12 of user zuul.
Jan 31 09:38:25 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 31 09:38:25 compute-0 sshd-session[60231]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:38:26 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 09:38:26 compute-0 python3.9[60387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:38:27 compute-0 python3.9[60541]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:38:28 compute-0 python3.9[60730]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:38:29 compute-0 sshd-session[60234]: Connection closed by 192.168.122.30 port 57920
Jan 31 09:38:29 compute-0 sshd-session[60231]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:38:29 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 09:38:29 compute-0 systemd[1]: session-12.scope: Consumed 2.095s CPU time.
Jan 31 09:38:29 compute-0 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Jan 31 09:38:29 compute-0 systemd-logind[795]: Removed session 12.
Jan 31 09:38:29 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 09:38:34 compute-0 sshd-session[60758]: Accepted publickey for zuul from 192.168.122.30 port 46786 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:38:34 compute-0 systemd-logind[795]: New session 13 of user zuul.
Jan 31 09:38:34 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 31 09:38:34 compute-0 sshd-session[60758]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:38:35 compute-0 python3.9[60911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:38:36 compute-0 python3.9[61066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:38:37 compute-0 sudo[61220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scbrkafbmablcjruopczccajgzwmuqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852317.1779854-35-69014638120291/AnsiballZ_setup.py'
Jan 31 09:38:37 compute-0 sudo[61220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:37 compute-0 python3.9[61222]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:38:37 compute-0 sudo[61220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:38 compute-0 sudo[61304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddydzvjcyorcllrulasltsypteqwubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852317.1779854-35-69014638120291/AnsiballZ_dnf.py'
Jan 31 09:38:38 compute-0 sudo[61304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:38 compute-0 python3.9[61306]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:38:40 compute-0 sudo[61304]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:40 compute-0 sudo[61458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrgjnltkwipytqiuwhvymoigwvfteur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852320.359325-47-73781315362888/AnsiballZ_setup.py'
Jan 31 09:38:40 compute-0 sudo[61458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:40 compute-0 python3.9[61460]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:38:41 compute-0 sudo[61458]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:41 compute-0 sudo[61649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtcfsusldllizcedhsayarxdbbjafwfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852321.3378565-58-52649290865977/AnsiballZ_file.py'
Jan 31 09:38:41 compute-0 sudo[61649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:42 compute-0 python3.9[61651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:42 compute-0 sudo[61649]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:42 compute-0 sudo[61802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bymumqajkvrptmtxancdownfxaiufvtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852322.1878235-66-63673418221709/AnsiballZ_command.py'
Jan 31 09:38:42 compute-0 sudo[61802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:42 compute-0 python3.9[61804]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:38:42 compute-0 sudo[61802]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:43 compute-0 sudo[61964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzofcgpmbvyqzhmwhmkztybrudrpqsbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852323.216532-74-160087731003321/AnsiballZ_stat.py'
Jan 31 09:38:43 compute-0 sudo[61964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:43 compute-0 python3.9[61966]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:38:43 compute-0 sudo[61964]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:44 compute-0 sudo[62042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwrvyufqzqqovixzoychwwvyhwmgtmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852323.216532-74-160087731003321/AnsiballZ_file.py'
Jan 31 09:38:44 compute-0 sudo[62042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:44 compute-0 python3.9[62044]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:38:44 compute-0 sudo[62042]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:44 compute-0 sudo[62194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhyzbjbugfbulvnxteaptcsvtlyhxqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852324.399495-86-147928691168863/AnsiballZ_stat.py'
Jan 31 09:38:44 compute-0 sudo[62194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:44 compute-0 python3.9[62196]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:38:44 compute-0 sudo[62194]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:45 compute-0 sudo[62272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvujwqtvsmfziqlxlitrtxhscwqnialr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852324.399495-86-147928691168863/AnsiballZ_file.py'
Jan 31 09:38:45 compute-0 sudo[62272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:45 compute-0 python3.9[62274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:38:45 compute-0 sudo[62272]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:45 compute-0 sudo[62425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmujaorfaarkqhajdqgpgyosxvudyjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852325.4077902-99-127642124853908/AnsiballZ_ini_file.py'
Jan 31 09:38:45 compute-0 sudo[62425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:45 compute-0 python3.9[62427]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:38:46 compute-0 sudo[62425]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:46 compute-0 sudo[62577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpnupuntklwxvmswlysezcopsrxqxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852326.1323678-99-164064176473117/AnsiballZ_ini_file.py'
Jan 31 09:38:46 compute-0 sudo[62577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:46 compute-0 python3.9[62579]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:38:46 compute-0 sudo[62577]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:46 compute-0 sudo[62729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouiqiwhuppolcuuldvtuqjnhvtnpenra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852326.7677433-99-63807872540827/AnsiballZ_ini_file.py'
Jan 31 09:38:46 compute-0 sudo[62729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:47 compute-0 python3.9[62731]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:38:47 compute-0 sudo[62729]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:47 compute-0 sudo[62881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqliyvtlubjqlfmjzjfefdenipzvcpao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852327.3011081-99-248551063063479/AnsiballZ_ini_file.py'
Jan 31 09:38:47 compute-0 sudo[62881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:47 compute-0 python3.9[62883]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:38:47 compute-0 sudo[62881]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:48 compute-0 sudo[63033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hruigspyqrvkmeoboynufswnjhcydcso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852328.2235215-130-201250889106109/AnsiballZ_dnf.py'
Jan 31 09:38:48 compute-0 sudo[63033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:48 compute-0 python3.9[63035]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:38:50 compute-0 sudo[63033]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:50 compute-0 sudo[63186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylbkuwiihuwahuyljhbufpaxjgnbpctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852330.4497905-141-50436333332296/AnsiballZ_setup.py'
Jan 31 09:38:50 compute-0 sudo[63186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:51 compute-0 python3.9[63188]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:38:51 compute-0 sudo[63186]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:51 compute-0 sudo[63340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhutiasadkqcbnlnatajymwakjbnkuwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852331.1773126-149-150429613845231/AnsiballZ_stat.py'
Jan 31 09:38:51 compute-0 sudo[63340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:51 compute-0 python3.9[63342]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:38:51 compute-0 sudo[63340]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:51 compute-0 sudo[63492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipvyvepsxvntjnstezvulptzjckeylc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852331.7497146-158-177350348619256/AnsiballZ_stat.py'
Jan 31 09:38:51 compute-0 sudo[63492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:52 compute-0 python3.9[63494]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:38:52 compute-0 sudo[63492]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:52 compute-0 sudo[63644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrphsfuuzusvaeisblnslufeldchfms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852332.4267576-168-16275243861973/AnsiballZ_command.py'
Jan 31 09:38:52 compute-0 sudo[63644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:52 compute-0 python3.9[63646]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:38:52 compute-0 sudo[63644]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:53 compute-0 sudo[63797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abguedthmogocfmbypxniqjvrezbnwul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852333.2897756-178-44777461002599/AnsiballZ_service_facts.py'
Jan 31 09:38:53 compute-0 sudo[63797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:53 compute-0 python3.9[63799]: ansible-service_facts Invoked
Jan 31 09:38:53 compute-0 network[63816]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:38:53 compute-0 network[63817]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:38:53 compute-0 network[63818]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:38:55 compute-0 sudo[63797]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:56 compute-0 sudo[64101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktmctcnnziymgjydcwxiukmwkdnuyho ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769852336.4642127-193-37644855833556/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769852336.4642127-193-37644855833556/args'
Jan 31 09:38:56 compute-0 sudo[64101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:56 compute-0 sudo[64101]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:57 compute-0 sudo[64268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgpjbxhoddgxyeabjgmqnfpnwrtlsmcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852337.2347934-204-72258094784580/AnsiballZ_dnf.py'
Jan 31 09:38:57 compute-0 sudo[64268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:38:57 compute-0 python3.9[64270]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:38:58 compute-0 sudo[64268]: pam_unix(sudo:session): session closed for user root
Jan 31 09:38:59 compute-0 sudo[64421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljyjzyjkvoulycpqpmzyaowfkfzvoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852339.2390816-217-106503308904014/AnsiballZ_package_facts.py'
Jan 31 09:38:59 compute-0 sudo[64421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:00 compute-0 python3.9[64423]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 09:39:00 compute-0 sudo[64421]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:01 compute-0 sudo[64573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvxkzlormmvtwkaqoklgxmkhzwksgja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852340.9056396-227-207272110889550/AnsiballZ_stat.py'
Jan 31 09:39:01 compute-0 sudo[64573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:01 compute-0 python3.9[64575]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:01 compute-0 sudo[64573]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:02 compute-0 sudo[64698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuygyxrxcbcafvtrsmpwixpohgvpzhdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852340.9056396-227-207272110889550/AnsiballZ_copy.py'
Jan 31 09:39:02 compute-0 sudo[64698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:02 compute-0 python3.9[64700]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852340.9056396-227-207272110889550/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:02 compute-0 sudo[64698]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:03 compute-0 sudo[64852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavzqtaowxohhgdqldcpaillrfyyllmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852342.7471635-242-107585949811465/AnsiballZ_stat.py'
Jan 31 09:39:03 compute-0 sudo[64852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:03 compute-0 python3.9[64854]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:03 compute-0 sudo[64852]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:03 compute-0 sudo[64977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpladquoflnvbjfsgwyleiymuqyoram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852342.7471635-242-107585949811465/AnsiballZ_copy.py'
Jan 31 09:39:03 compute-0 sudo[64977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:03 compute-0 python3.9[64979]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852342.7471635-242-107585949811465/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:03 compute-0 sudo[64977]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:05 compute-0 sudo[65131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atcnfufvmcszglikfsmqpkqruwtgksls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852344.5921047-263-52445070857103/AnsiballZ_lineinfile.py'
Jan 31 09:39:05 compute-0 sudo[65131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:05 compute-0 python3.9[65133]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:05 compute-0 sudo[65131]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:06 compute-0 sudo[65285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtggtvqforfipkmizieyhdzpcabesrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852346.0689688-278-252471806035503/AnsiballZ_setup.py'
Jan 31 09:39:06 compute-0 sudo[65285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:06 compute-0 python3.9[65287]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:39:06 compute-0 sudo[65285]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:07 compute-0 sudo[65369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoiahnyselrcrkqnbymbhetcpqcmrabr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852346.0689688-278-252471806035503/AnsiballZ_systemd.py'
Jan 31 09:39:07 compute-0 sudo[65369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:07 compute-0 python3.9[65371]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:07 compute-0 sudo[65369]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:08 compute-0 sudo[65523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcbrxuarxuyeagmqayrgnpwliftclggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852348.2666748-294-211787665457767/AnsiballZ_setup.py'
Jan 31 09:39:08 compute-0 sudo[65523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:08 compute-0 python3.9[65525]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:39:09 compute-0 sudo[65523]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:09 compute-0 sudo[65607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjjxeandqmnkcplqzlzpxfbdynjplnqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852348.2666748-294-211787665457767/AnsiballZ_systemd.py'
Jan 31 09:39:09 compute-0 sudo[65607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:09 compute-0 python3.9[65609]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:39:09 compute-0 chronyd[800]: chronyd exiting
Jan 31 09:39:09 compute-0 systemd[1]: Stopping NTP client/server...
Jan 31 09:39:09 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 09:39:09 compute-0 systemd[1]: Stopped NTP client/server.
Jan 31 09:39:09 compute-0 systemd[1]: Starting NTP client/server...
Jan 31 09:39:09 compute-0 chronyd[65618]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 09:39:09 compute-0 chronyd[65618]: Frequency -25.061 +/- 0.419 ppm read from /var/lib/chrony/drift
Jan 31 09:39:09 compute-0 chronyd[65618]: Loaded seccomp filter (level 2)
Jan 31 09:39:09 compute-0 systemd[1]: Started NTP client/server.
Jan 31 09:39:09 compute-0 sudo[65607]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:10 compute-0 sshd-session[60761]: Connection closed by 192.168.122.30 port 46786
Jan 31 09:39:10 compute-0 sshd-session[60758]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:39:10 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 09:39:10 compute-0 systemd[1]: session-13.scope: Consumed 21.815s CPU time.
Jan 31 09:39:10 compute-0 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Jan 31 09:39:10 compute-0 systemd-logind[795]: Removed session 13.
Jan 31 09:39:18 compute-0 sshd-session[65644]: Accepted publickey for zuul from 192.168.122.30 port 51074 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:39:18 compute-0 systemd-logind[795]: New session 14 of user zuul.
Jan 31 09:39:18 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 31 09:39:18 compute-0 sshd-session[65644]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:39:19 compute-0 python3.9[65797]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:39:20 compute-0 sudo[65951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geqihkyeqjfkzxrzpcvzrzszjdutpldv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852360.058257-28-265165737082842/AnsiballZ_file.py'
Jan 31 09:39:20 compute-0 sudo[65951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:20 compute-0 python3.9[65953]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:20 compute-0 sudo[65951]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:21 compute-0 sudo[66126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcgnpickacmtqlfozszjdlaatiimroqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852360.8252132-36-227697459430056/AnsiballZ_stat.py'
Jan 31 09:39:21 compute-0 sudo[66126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:21 compute-0 python3.9[66128]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:21 compute-0 sudo[66126]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:21 compute-0 sudo[66204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghepujfbsmocaahcrpodckhchfqjlbuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852360.8252132-36-227697459430056/AnsiballZ_file.py'
Jan 31 09:39:21 compute-0 sudo[66204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:21 compute-0 python3.9[66206]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.nkpqkytz recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:21 compute-0 sudo[66204]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:22 compute-0 sudo[66356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkvaqbhjxsmukhflejscpleonltsiknx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852362.1818163-56-259703656115975/AnsiballZ_stat.py'
Jan 31 09:39:22 compute-0 sudo[66356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:22 compute-0 python3.9[66358]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:22 compute-0 sudo[66356]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:22 compute-0 sudo[66479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjcwlvmcllfyiqgerpdjqqlesxfbyzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852362.1818163-56-259703656115975/AnsiballZ_copy.py'
Jan 31 09:39:22 compute-0 sudo[66479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:23 compute-0 python3.9[66481]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852362.1818163-56-259703656115975/.source _original_basename=.miufpzni follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:23 compute-0 sudo[66479]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:23 compute-0 sudo[66631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovloitiskltxbmbhsyuipzhizejtysdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852363.3276715-72-69136891183630/AnsiballZ_file.py'
Jan 31 09:39:23 compute-0 sudo[66631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:23 compute-0 python3.9[66633]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:39:24 compute-0 sudo[66631]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:24 compute-0 sudo[66783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuozoxjdkzmaukeotzrzikudzifavlzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852364.1727862-80-162671365772863/AnsiballZ_stat.py'
Jan 31 09:39:24 compute-0 sudo[66783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:24 compute-0 python3.9[66785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:24 compute-0 sudo[66783]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:24 compute-0 sudo[66906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgxczxqgfncmhgvhmwlhftaxfgjofgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852364.1727862-80-162671365772863/AnsiballZ_copy.py'
Jan 31 09:39:24 compute-0 sudo[66906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:25 compute-0 python3.9[66908]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852364.1727862-80-162671365772863/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:39:25 compute-0 sudo[66906]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:25 compute-0 sudo[67058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohbmugjoliupnaigtbpymgynytyirvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852365.230553-80-28730596652792/AnsiballZ_stat.py'
Jan 31 09:39:25 compute-0 sudo[67058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:25 compute-0 python3.9[67060]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:25 compute-0 sudo[67058]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:25 compute-0 sudo[67181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcybbtfbxmnofwfvqbfejiqzudhsgnet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852365.230553-80-28730596652792/AnsiballZ_copy.py'
Jan 31 09:39:25 compute-0 sudo[67181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:26 compute-0 python3.9[67183]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852365.230553-80-28730596652792/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:39:26 compute-0 sudo[67181]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:26 compute-0 sudo[67333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwryhlhjmfxqmchidxpkruckcoeoqxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852366.326211-109-258502840142488/AnsiballZ_file.py'
Jan 31 09:39:26 compute-0 sudo[67333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:26 compute-0 python3.9[67335]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:26 compute-0 sudo[67333]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:27 compute-0 sudo[67485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atihbnmkmdunbrflkafcpyzewokmltlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852366.8952253-117-260165025371655/AnsiballZ_stat.py'
Jan 31 09:39:27 compute-0 sudo[67485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:27 compute-0 python3.9[67487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:27 compute-0 sudo[67485]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:27 compute-0 sudo[67608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drhurilqfxujhcyojnpuupdkjbhnesmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852366.8952253-117-260165025371655/AnsiballZ_copy.py'
Jan 31 09:39:27 compute-0 sudo[67608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:27 compute-0 python3.9[67610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852366.8952253-117-260165025371655/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:27 compute-0 sudo[67608]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:28 compute-0 sudo[67760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwjkrgemiywhvzouewyplkyfnfgffrtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852367.95386-132-197928676148673/AnsiballZ_stat.py'
Jan 31 09:39:28 compute-0 sudo[67760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:28 compute-0 python3.9[67762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:28 compute-0 sudo[67760]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:28 compute-0 sudo[67883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvogytjxtpjedasbvzmsvwdoefovpedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852367.95386-132-197928676148673/AnsiballZ_copy.py'
Jan 31 09:39:28 compute-0 sudo[67883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:29 compute-0 python3.9[67885]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852367.95386-132-197928676148673/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:29 compute-0 sudo[67883]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:29 compute-0 sudo[68035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igromnxhqevlrrjyfhzmsdwcsxxbxtiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852369.3089402-147-253912760956797/AnsiballZ_systemd.py'
Jan 31 09:39:29 compute-0 sudo[68035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:30 compute-0 python3.9[68037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:30 compute-0 systemd[1]: Reloading.
Jan 31 09:39:30 compute-0 systemd-rc-local-generator[68063]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:30 compute-0 systemd-sysv-generator[68066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:30 compute-0 systemd[1]: Reloading.
Jan 31 09:39:30 compute-0 systemd-sysv-generator[68106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:30 compute-0 systemd-rc-local-generator[68102]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:30 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 09:39:30 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 09:39:30 compute-0 sudo[68035]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:30 compute-0 sudo[68262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqdpoitjtvsrodshcqghkytcgujtktq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852370.7638235-155-87314420934338/AnsiballZ_stat.py'
Jan 31 09:39:30 compute-0 sudo[68262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:31 compute-0 python3.9[68264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:31 compute-0 sudo[68262]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:31 compute-0 sudo[68385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exofeedfgimqosxvswzapjlpngfnszcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852370.7638235-155-87314420934338/AnsiballZ_copy.py'
Jan 31 09:39:31 compute-0 sudo[68385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:31 compute-0 python3.9[68387]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852370.7638235-155-87314420934338/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:31 compute-0 sudo[68385]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:32 compute-0 sudo[68537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjijkojcmbtowqaakhqscykkyfbbjknf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852371.869213-170-156912424728146/AnsiballZ_stat.py'
Jan 31 09:39:32 compute-0 sudo[68537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:32 compute-0 python3.9[68539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:32 compute-0 sudo[68537]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:32 compute-0 sudo[68660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlvuxocymbbaxeaigthiqxausvfpccs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852371.869213-170-156912424728146/AnsiballZ_copy.py'
Jan 31 09:39:32 compute-0 sudo[68660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:32 compute-0 python3.9[68662]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852371.869213-170-156912424728146/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:33 compute-0 sudo[68660]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:33 compute-0 sudo[68812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrgmxdnotoecocpdgrxujnmihwjkxfze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852373.1481667-185-35286124119733/AnsiballZ_systemd.py'
Jan 31 09:39:33 compute-0 sudo[68812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:33 compute-0 python3.9[68814]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:33 compute-0 systemd[1]: Reloading.
Jan 31 09:39:33 compute-0 systemd-rc-local-generator[68841]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:33 compute-0 systemd-sysv-generator[68845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:33 compute-0 systemd[1]: Reloading.
Jan 31 09:39:33 compute-0 systemd-sysv-generator[68881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:33 compute-0 systemd-rc-local-generator[68878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:34 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 09:39:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 09:39:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 09:39:34 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 09:39:34 compute-0 sudo[68812]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:34 compute-0 python3.9[69040]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:39:34 compute-0 network[69057]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:39:34 compute-0 network[69058]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:39:34 compute-0 network[69059]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:39:37 compute-0 sudo[69319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdmbkuzfcxckyojilrwlddcuavnmihoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852376.926892-201-133583153009323/AnsiballZ_systemd.py'
Jan 31 09:39:37 compute-0 sudo[69319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:37 compute-0 python3.9[69321]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:37 compute-0 systemd[1]: Reloading.
Jan 31 09:39:37 compute-0 systemd-rc-local-generator[69346]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:37 compute-0 systemd-sysv-generator[69353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:37 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 09:39:37 compute-0 iptables.init[69362]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 09:39:37 compute-0 iptables.init[69362]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 09:39:37 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 09:39:37 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 09:39:38 compute-0 sudo[69319]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:38 compute-0 sudo[69556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcwugsobomwvzqvedcktpjvjkqiwrkjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852378.1386929-201-166380310156574/AnsiballZ_systemd.py'
Jan 31 09:39:38 compute-0 sudo[69556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:38 compute-0 python3.9[69558]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:38 compute-0 sudo[69556]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:39 compute-0 sudo[69710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-magmzgqcznovlldfadfuiubokphxaypi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852378.9974616-217-278874206418273/AnsiballZ_systemd.py'
Jan 31 09:39:39 compute-0 sudo[69710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:39 compute-0 python3.9[69712]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:39:39 compute-0 systemd[1]: Reloading.
Jan 31 09:39:39 compute-0 systemd-rc-local-generator[69736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:39:39 compute-0 systemd-sysv-generator[69743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:39:39 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 31 09:39:39 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 31 09:39:39 compute-0 sudo[69710]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:40 compute-0 sudo[69902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyydwalxijsrnenedmlkyakmgvzovqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852379.9651918-225-137257918484584/AnsiballZ_command.py'
Jan 31 09:39:40 compute-0 sudo[69902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:40 compute-0 python3.9[69904]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:39:40 compute-0 sudo[69902]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:41 compute-0 sudo[70055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fprcshfiiaizednupjcijtpsgkztnwsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852381.0819373-239-35477400438077/AnsiballZ_stat.py'
Jan 31 09:39:41 compute-0 sudo[70055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:41 compute-0 python3.9[70057]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:41 compute-0 sudo[70055]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:41 compute-0 sudo[70180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqbxwvpparppowlpemqdlqmvoinsypm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852381.0819373-239-35477400438077/AnsiballZ_copy.py'
Jan 31 09:39:41 compute-0 sudo[70180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:41 compute-0 python3.9[70182]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852381.0819373-239-35477400438077/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:41 compute-0 sudo[70180]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:42 compute-0 sudo[70333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koscseaovpsjqmbroyejidsybliurefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852382.1276343-254-103132157069357/AnsiballZ_systemd.py'
Jan 31 09:39:42 compute-0 sudo[70333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:42 compute-0 python3.9[70335]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:39:42 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 09:39:42 compute-0 sshd[1005]: Received SIGHUP; restarting.
Jan 31 09:39:42 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 09:39:42 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 09:39:42 compute-0 sshd[1005]: Server listening on :: port 22.
Jan 31 09:39:42 compute-0 sudo[70333]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:43 compute-0 sudo[70489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqcysrmtlroxhxbcvgyizqhpbpzrpcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852382.8736129-262-31431206453598/AnsiballZ_file.py'
Jan 31 09:39:43 compute-0 sudo[70489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:43 compute-0 python3.9[70491]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:43 compute-0 sudo[70489]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:43 compute-0 sudo[70641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlbjbrpgowgudijnheexeezibjymwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852383.4411182-270-128321518972964/AnsiballZ_stat.py'
Jan 31 09:39:43 compute-0 sudo[70641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:43 compute-0 python3.9[70643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:43 compute-0 sudo[70641]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:44 compute-0 sudo[70764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xulcicxogeatwlcwusjmzhyyuveblslu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852383.4411182-270-128321518972964/AnsiballZ_copy.py'
Jan 31 09:39:44 compute-0 sudo[70764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:44 compute-0 python3.9[70766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852383.4411182-270-128321518972964/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:44 compute-0 sudo[70764]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:44 compute-0 sudo[70916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrassdeuwjtgfxlgfctarkhnjqfppza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852384.5751083-288-95743492743400/AnsiballZ_timezone.py'
Jan 31 09:39:44 compute-0 sudo[70916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:45 compute-0 python3.9[70918]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 09:39:45 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 09:39:45 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 09:39:45 compute-0 sudo[70916]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:45 compute-0 sudo[71072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sskrckxqvnzuztrdzzzwghwgfqdhthms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852385.499455-297-240056186363690/AnsiballZ_file.py'
Jan 31 09:39:45 compute-0 sudo[71072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:45 compute-0 python3.9[71074]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:45 compute-0 sudo[71072]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:46 compute-0 sudo[71224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxieriladvjoedmipaxdnkyjojpnoqga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852386.071609-305-213094231220459/AnsiballZ_stat.py'
Jan 31 09:39:46 compute-0 sudo[71224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:46 compute-0 python3.9[71226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:46 compute-0 sudo[71224]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:46 compute-0 sudo[71347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmsdthvqmiwywuxhwlzinalcuqnmzxsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852386.071609-305-213094231220459/AnsiballZ_copy.py'
Jan 31 09:39:46 compute-0 sudo[71347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:46 compute-0 python3.9[71349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852386.071609-305-213094231220459/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:46 compute-0 sudo[71347]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:47 compute-0 sudo[71499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eilndrettxfstfnapxzkqckbgwtjvrjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852387.0488782-320-261670065916113/AnsiballZ_stat.py'
Jan 31 09:39:47 compute-0 sudo[71499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:47 compute-0 python3.9[71501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:47 compute-0 sudo[71499]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:47 compute-0 sudo[71622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdbjpphmihoowprsosvhamgpfxwxoyts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852387.0488782-320-261670065916113/AnsiballZ_copy.py'
Jan 31 09:39:47 compute-0 sudo[71622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:47 compute-0 python3.9[71624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852387.0488782-320-261670065916113/.source.yaml _original_basename=.7wvn55id follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:47 compute-0 sudo[71622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:48 compute-0 sudo[71774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igumrvsqiblkstnhuadmibkvlyaworew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852388.0820603-335-126218041699470/AnsiballZ_stat.py'
Jan 31 09:39:48 compute-0 sudo[71774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:48 compute-0 python3.9[71776]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:48 compute-0 sudo[71774]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:48 compute-0 sudo[71897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parfaphxsybydrtsgbsrjrkhmgxtcrvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852388.0820603-335-126218041699470/AnsiballZ_copy.py'
Jan 31 09:39:48 compute-0 sudo[71897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:48 compute-0 python3.9[71899]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852388.0820603-335-126218041699470/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:48 compute-0 sudo[71897]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:49 compute-0 sudo[72049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbzxyhlaicetycqqqymgjoscrnbxbaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852389.1416743-350-247532365510999/AnsiballZ_command.py'
Jan 31 09:39:49 compute-0 sudo[72049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:49 compute-0 python3.9[72051]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:39:49 compute-0 sudo[72049]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:50 compute-0 sudo[72202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veezkzxvtqzoutewvceqizapfpoucbnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852389.714502-358-170898121501993/AnsiballZ_command.py'
Jan 31 09:39:50 compute-0 sudo[72202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:50 compute-0 python3.9[72204]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:39:50 compute-0 sudo[72202]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:50 compute-0 sudo[72355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flxyroqfwrgumihiznrdburshtlwvhot ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852390.5639603-366-255059033234191/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 09:39:50 compute-0 sudo[72355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:51 compute-0 python3[72357]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 09:39:51 compute-0 sudo[72355]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:51 compute-0 sudo[72507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbtsfwgrkfpowdxmvjzfcnacixodblli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852391.328322-374-150191209058076/AnsiballZ_stat.py'
Jan 31 09:39:51 compute-0 sudo[72507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:51 compute-0 python3.9[72509]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:51 compute-0 sudo[72507]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:52 compute-0 sudo[72630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxamnxrqjtrlotkvwllwecfneswvsgmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852391.328322-374-150191209058076/AnsiballZ_copy.py'
Jan 31 09:39:52 compute-0 sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:52 compute-0 python3.9[72632]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852391.328322-374-150191209058076/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:52 compute-0 sudo[72630]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:52 compute-0 sudo[72782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomudkhimuwjlhglfzaruypoupwazmir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852392.3485067-389-207452990096086/AnsiballZ_stat.py'
Jan 31 09:39:52 compute-0 sudo[72782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:52 compute-0 python3.9[72784]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:52 compute-0 sudo[72782]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:53 compute-0 sudo[72905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqxhgnpxtweyyjxqhvklqwyujjzfxmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852392.3485067-389-207452990096086/AnsiballZ_copy.py'
Jan 31 09:39:53 compute-0 sudo[72905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:53 compute-0 python3.9[72907]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852392.3485067-389-207452990096086/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:53 compute-0 sudo[72905]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:53 compute-0 sudo[73057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzzipbjrmlsqvagzoklzmefbmbdahtxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852393.4269593-404-41061766803455/AnsiballZ_stat.py'
Jan 31 09:39:53 compute-0 sudo[73057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:53 compute-0 python3.9[73059]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:53 compute-0 sudo[73057]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:54 compute-0 sudo[73180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexhfiqknwjczidtgpadhphzmrfbcpnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852393.4269593-404-41061766803455/AnsiballZ_copy.py'
Jan 31 09:39:54 compute-0 sudo[73180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:54 compute-0 python3.9[73182]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852393.4269593-404-41061766803455/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:54 compute-0 sudo[73180]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:54 compute-0 sudo[73332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdrqfgjoiakzreagmhauttlmzeobqhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852394.4385228-419-246286445715108/AnsiballZ_stat.py'
Jan 31 09:39:54 compute-0 sudo[73332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:54 compute-0 python3.9[73334]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:54 compute-0 sudo[73332]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:55 compute-0 sudo[73455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoxuypswvnzeogrmreoflugnnpvqkmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852394.4385228-419-246286445715108/AnsiballZ_copy.py'
Jan 31 09:39:55 compute-0 sudo[73455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:55 compute-0 python3.9[73457]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852394.4385228-419-246286445715108/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:55 compute-0 sudo[73455]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:55 compute-0 sudo[73607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusdrmvvhxpfxdhipqnmzhhdgskxnool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852395.5936208-434-95994222953546/AnsiballZ_stat.py'
Jan 31 09:39:55 compute-0 sudo[73607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:56 compute-0 python3.9[73609]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:39:56 compute-0 sudo[73607]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:56 compute-0 sudo[73730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suoonivwvxyemyoaujozetbqcighocex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852395.5936208-434-95994222953546/AnsiballZ_copy.py'
Jan 31 09:39:56 compute-0 sudo[73730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:56 compute-0 python3.9[73732]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852395.5936208-434-95994222953546/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:56 compute-0 sudo[73730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:56 compute-0 sudo[73882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehesbyrvahrplmtdwzeghaxvylqzreoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852396.6556456-449-225012598297602/AnsiballZ_file.py'
Jan 31 09:39:56 compute-0 sudo[73882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:57 compute-0 python3.9[73884]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:57 compute-0 sudo[73882]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:57 compute-0 sudo[74034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgxxzkcrclxpbstqakmvcaloozxplplo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852397.2025528-457-119525082883313/AnsiballZ_command.py'
Jan 31 09:39:57 compute-0 sudo[74034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:57 compute-0 python3.9[74036]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:39:57 compute-0 sudo[74034]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:58 compute-0 sudo[74193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czddwnvoaowsuzilkprjlsujrphnhifz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852397.787881-465-111003494060116/AnsiballZ_blockinfile.py'
Jan 31 09:39:58 compute-0 sudo[74193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:58 compute-0 python3.9[74195]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:58 compute-0 sudo[74193]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:58 compute-0 sudo[74346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmzgxllmiqjkaspouhadctlhzlzmhvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852398.634415-474-61677852972913/AnsiballZ_file.py'
Jan 31 09:39:58 compute-0 sudo[74346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:59 compute-0 python3.9[74348]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:59 compute-0 sudo[74346]: pam_unix(sudo:session): session closed for user root
Jan 31 09:39:59 compute-0 sudo[74498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbcdvhqjrzjvospvnegqdqqxypbqeha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852399.200044-474-261827538319823/AnsiballZ_file.py'
Jan 31 09:39:59 compute-0 sudo[74498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:39:59 compute-0 python3.9[74500]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:39:59 compute-0 sudo[74498]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:00 compute-0 sudo[74650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efsvpnkhuupvlekgnrxxxiimagjucnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852399.806634-489-237903106072948/AnsiballZ_mount.py'
Jan 31 09:40:00 compute-0 sudo[74650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:00 compute-0 python3.9[74652]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 09:40:00 compute-0 sudo[74650]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:00 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:40:00 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:40:00 compute-0 sudo[74804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywvzqueebznhfmofeqmtqryvnyqpegc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852400.668895-489-54559907394626/AnsiballZ_mount.py'
Jan 31 09:40:00 compute-0 sudo[74804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:01 compute-0 python3.9[74806]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 09:40:01 compute-0 sudo[74804]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:01 compute-0 sshd-session[65647]: Connection closed by 192.168.122.30 port 51074
Jan 31 09:40:01 compute-0 sshd-session[65644]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:40:01 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 09:40:01 compute-0 systemd[1]: session-14.scope: Consumed 29.005s CPU time.
Jan 31 09:40:01 compute-0 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Jan 31 09:40:01 compute-0 systemd-logind[795]: Removed session 14.
Jan 31 09:40:06 compute-0 sshd-session[74832]: Accepted publickey for zuul from 192.168.122.30 port 39962 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:40:06 compute-0 systemd-logind[795]: New session 15 of user zuul.
Jan 31 09:40:06 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 31 09:40:06 compute-0 sshd-session[74832]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:40:07 compute-0 sudo[74985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqfbgymdbtfvivfdloskdfdpausmrdux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852406.7249312-16-201616873802628/AnsiballZ_tempfile.py'
Jan 31 09:40:07 compute-0 sudo[74985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:07 compute-0 python3.9[74987]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 09:40:07 compute-0 sudo[74985]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:07 compute-0 sudo[75137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzpqamtgsgrquxyunbyafcvflfrtgwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852407.5047119-28-231384130465912/AnsiballZ_stat.py'
Jan 31 09:40:07 compute-0 sudo[75137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:08 compute-0 python3.9[75139]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:40:08 compute-0 sudo[75137]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:08 compute-0 sudo[75289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgsdgdxcgtrhtccowytohjvnnucueksl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852408.294883-38-101618365479107/AnsiballZ_setup.py'
Jan 31 09:40:08 compute-0 sudo[75289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:09 compute-0 python3.9[75291]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:40:09 compute-0 sudo[75289]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:09 compute-0 sudo[75441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szjgpjhugqxdztyqcbyzvlucdhllazrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852409.2953904-47-144924074091237/AnsiballZ_blockinfile.py'
Jan 31 09:40:09 compute-0 sudo[75441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:09 compute-0 python3.9[75443]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCxf4BMMUSZQ31k1eDep0DvOEiIRkvYwRaMMLYg3el1VdccslSekTlzysNRGAwnp4Qgx29KUuNNkDhdzrCGx+Yc8O448a4fdafuDKFqlSyG2UaLUCTRiLazABrm0a4QOnxYbONZ4L2tj1Yq9zSp5C2NiZOWLsvlPKxctuqmzVnY1FflXoGcqfSLRS4ChnQxpaPvvcR2qA/GglbukSbojejlOEbxvJdd19dPZ+pvntZZ+K7Qe64k8p1T1mvKwMm9BrsY+T/i26vNvhysvc0Wyky3mDYI8eEKq/5R6Xc1gztiPv60CZe/UBGNrfs6g3W1p/Wln6eKm1bqKlarATvLhyxAwvyXaZOQL/mQR5gvp5aUYvo32NTqPgqRST8lzIgUY8E/SScqQ2PRSpybDLXPMSEYSFgjqTPnN2SUawOEgH57yVfyWqtm90q3VTqhoFvF11HC5EXp9PGGezEhe7lF5reKkF+W2KkqkaoDxoxquhPH9dbBau1S6rPMCXhTG2/rS8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIbQntirFiSqY0bWjuDsqQ2c8W43uCI/DyxLGvt1CnFb
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBMcGGd0H2rzxMk4FlHgzz5t3ne9KAn6F0oSx+SrVdsJx5qwiaIbVFIkct2/AZJdseD4Gq92DjbtWx+I8GpfNEw=
                                             create=True mode=0644 path=/tmp/ansible.y72xu23a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:09 compute-0 sudo[75441]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:10 compute-0 sudo[75593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwdhomhhrbslyzgyxgqvqtdaapqgtsxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852410.0085742-55-212917322634715/AnsiballZ_command.py'
Jan 31 09:40:10 compute-0 sudo[75593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:10 compute-0 python3.9[75595]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y72xu23a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:40:10 compute-0 sudo[75593]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:11 compute-0 sudo[75747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdomuflyuzofynoticjibnkcetskoleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852410.7223814-63-163713933370164/AnsiballZ_file.py'
Jan 31 09:40:11 compute-0 sudo[75747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:11 compute-0 python3.9[75749]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y72xu23a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:11 compute-0 sudo[75747]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:11 compute-0 sshd-session[74835]: Connection closed by 192.168.122.30 port 39962
Jan 31 09:40:11 compute-0 sshd-session[74832]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:40:11 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 09:40:11 compute-0 systemd[1]: session-15.scope: Consumed 2.854s CPU time.
Jan 31 09:40:11 compute-0 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Jan 31 09:40:11 compute-0 systemd-logind[795]: Removed session 15.
Jan 31 09:40:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 09:40:17 compute-0 sshd-session[75777]: Accepted publickey for zuul from 192.168.122.30 port 39494 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:40:17 compute-0 systemd-logind[795]: New session 16 of user zuul.
Jan 31 09:40:17 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 31 09:40:17 compute-0 sshd-session[75777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:40:17 compute-0 python3.9[75930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:40:18 compute-0 sudo[76084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnfbkehmaggwuflhdsfwxhyebyzxyutx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852418.2617595-27-56501576141670/AnsiballZ_systemd.py'
Jan 31 09:40:18 compute-0 sudo[76084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:19 compute-0 python3.9[76086]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 09:40:19 compute-0 sudo[76084]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:19 compute-0 sudo[76238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvusjbhgsnxphheadgihbfcggjnglqpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852419.4170828-35-119695013487179/AnsiballZ_systemd.py'
Jan 31 09:40:19 compute-0 sudo[76238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:19 compute-0 python3.9[76240]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:40:20 compute-0 sudo[76238]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:20 compute-0 sudo[76391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqcnrmxzltghjsqptwvwqcxcujnnlsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852420.1742785-44-236487480471268/AnsiballZ_command.py'
Jan 31 09:40:20 compute-0 sudo[76391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:20 compute-0 python3.9[76393]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:40:20 compute-0 sudo[76391]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:21 compute-0 sudo[76544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apbuyrstcwzpnhxvoiptsqfbgqthlvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852421.1069236-52-81421120424470/AnsiballZ_stat.py'
Jan 31 09:40:21 compute-0 sudo[76544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:21 compute-0 python3.9[76546]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:40:21 compute-0 sudo[76544]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:22 compute-0 sudo[76698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqggmlbjvgazfwkvblxqpqptnzzvtver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852421.9188752-60-156257304634747/AnsiballZ_command.py'
Jan 31 09:40:22 compute-0 sudo[76698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:22 compute-0 python3.9[76700]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:40:22 compute-0 sudo[76698]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:23 compute-0 sudo[76853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpdxkfapowgrhcsahrclnedhkxwlumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852422.612008-68-20748759345723/AnsiballZ_file.py'
Jan 31 09:40:23 compute-0 sudo[76853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:23 compute-0 python3.9[76855]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:23 compute-0 sudo[76853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:23 compute-0 sshd-session[75780]: Connection closed by 192.168.122.30 port 39494
Jan 31 09:40:23 compute-0 sshd-session[75777]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:40:23 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 09:40:23 compute-0 systemd[1]: session-16.scope: Consumed 3.859s CPU time.
Jan 31 09:40:23 compute-0 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Jan 31 09:40:23 compute-0 systemd-logind[795]: Removed session 16.
Jan 31 09:40:29 compute-0 sshd-session[76881]: Accepted publickey for zuul from 192.168.122.30 port 55796 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:40:29 compute-0 systemd-logind[795]: New session 17 of user zuul.
Jan 31 09:40:29 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 31 09:40:29 compute-0 sshd-session[76881]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:40:30 compute-0 python3.9[77034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:40:31 compute-0 sudo[77188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptrcnyqrovwqgjkfdrmeyqpsxvaguoiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852431.0124638-29-42085644482446/AnsiballZ_setup.py'
Jan 31 09:40:31 compute-0 sudo[77188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:31 compute-0 python3.9[77190]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:40:31 compute-0 sudo[77188]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:32 compute-0 sudo[77272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilbautojuduvpuizjjgjsnmwcalziuqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852431.0124638-29-42085644482446/AnsiballZ_dnf.py'
Jan 31 09:40:32 compute-0 sudo[77272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:32 compute-0 python3.9[77274]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 09:40:34 compute-0 sudo[77272]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:34 compute-0 python3.9[77425]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:40:36 compute-0 python3.9[77576]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:40:36 compute-0 python3.9[77726]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:40:37 compute-0 python3.9[77876]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:40:38 compute-0 sshd-session[76884]: Connection closed by 192.168.122.30 port 55796
Jan 31 09:40:38 compute-0 sshd-session[76881]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:40:38 compute-0 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Jan 31 09:40:38 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 09:40:38 compute-0 systemd[1]: session-17.scope: Consumed 5.380s CPU time.
Jan 31 09:40:38 compute-0 systemd-logind[795]: Removed session 17.
Jan 31 09:40:45 compute-0 sshd-session[77901]: Accepted publickey for zuul from 192.168.122.30 port 47300 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:40:45 compute-0 systemd-logind[795]: New session 18 of user zuul.
Jan 31 09:40:45 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 31 09:40:45 compute-0 sshd-session[77901]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:40:46 compute-0 python3.9[78054]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:40:47 compute-0 sudo[78208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-momvpsrzqphxnmllwekfupqpukdgbtuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852447.1963248-45-77000521989303/AnsiballZ_file.py'
Jan 31 09:40:47 compute-0 sudo[78208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:47 compute-0 python3.9[78210]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:47 compute-0 sudo[78208]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:48 compute-0 sudo[78360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjpsiouhbwwefkibczdwrueyhvfqgybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852447.9015067-45-154151080011156/AnsiballZ_file.py'
Jan 31 09:40:48 compute-0 sudo[78360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:48 compute-0 python3.9[78362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:48 compute-0 sudo[78360]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:48 compute-0 sudo[78512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mulwyirxakcztnuvaewkppbpgoaouhbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852448.4935217-60-64836844434607/AnsiballZ_stat.py'
Jan 31 09:40:48 compute-0 sudo[78512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:49 compute-0 python3.9[78514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:49 compute-0 sudo[78512]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:49 compute-0 sudo[78635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcsglyiumnaifhjmbdfhsqnotedcdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852448.4935217-60-64836844434607/AnsiballZ_copy.py'
Jan 31 09:40:49 compute-0 sudo[78635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:49 compute-0 python3.9[78637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852448.4935217-60-64836844434607/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f8677602071d1c9df3545a2f5046eda9c3e0390f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:49 compute-0 sudo[78635]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:50 compute-0 sudo[78787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooaijprnmgeqdyhdbbkjbycckxusbvig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852449.8922055-60-3273552757215/AnsiballZ_stat.py'
Jan 31 09:40:50 compute-0 sudo[78787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:50 compute-0 python3.9[78789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:50 compute-0 sudo[78787]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:50 compute-0 sudo[78910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihhewlufsqohmycxsdnsdvwboepqfise ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852449.8922055-60-3273552757215/AnsiballZ_copy.py'
Jan 31 09:40:50 compute-0 sudo[78910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:50 compute-0 python3.9[78912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852449.8922055-60-3273552757215/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=021ce851d899c5f6e52cceece925290d4d5ff372 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:50 compute-0 sudo[78910]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:51 compute-0 sudo[79062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epeicttxuldsttelgblwzyrfzzbxbdfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852450.8677778-60-32869705860563/AnsiballZ_stat.py'
Jan 31 09:40:51 compute-0 sudo[79062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:51 compute-0 python3.9[79064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:51 compute-0 sudo[79062]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:51 compute-0 sudo[79185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtsroqwhcijgreovgposizmhwrsrfvcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852450.8677778-60-32869705860563/AnsiballZ_copy.py'
Jan 31 09:40:51 compute-0 sudo[79185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:51 compute-0 python3.9[79187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852450.8677778-60-32869705860563/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=47483c0f9fe79433748f530ffff21495779a8b5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:51 compute-0 sudo[79185]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:52 compute-0 sudo[79337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhbvvahdwoqagudkqunurwjlscufzkwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852452.1161852-104-185106909335442/AnsiballZ_file.py'
Jan 31 09:40:52 compute-0 sudo[79337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:52 compute-0 python3.9[79339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:52 compute-0 sudo[79337]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:52 compute-0 sudo[79489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzslzhpbphoksrkpnuonetogdhakwqgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852452.65878-104-74218386128572/AnsiballZ_file.py'
Jan 31 09:40:52 compute-0 sudo[79489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:53 compute-0 python3.9[79491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:53 compute-0 sudo[79489]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:53 compute-0 sudo[79641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvprazeuqozpshbwqxfbfmvlvlzrrmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852453.2828944-119-94235099802613/AnsiballZ_stat.py'
Jan 31 09:40:53 compute-0 sudo[79641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:53 compute-0 python3.9[79643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:53 compute-0 sudo[79641]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:53 compute-0 sudo[79764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlpsxarzvkuaorkxzzadifqhdinkinfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852453.2828944-119-94235099802613/AnsiballZ_copy.py'
Jan 31 09:40:53 compute-0 sudo[79764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:54 compute-0 python3.9[79766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852453.2828944-119-94235099802613/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=874d8890b3650c7f050c1ad086f0a5a9a0b8d4d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:54 compute-0 sudo[79764]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:54 compute-0 sudo[79916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abimuyumxmusmvokteuerwbxlzdxfdye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852454.3529468-119-76230923506802/AnsiballZ_stat.py'
Jan 31 09:40:54 compute-0 sudo[79916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:54 compute-0 python3.9[79918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:54 compute-0 sudo[79916]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:55 compute-0 sudo[80039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptkqltbtueyuravvfivtoqenjotebovc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852454.3529468-119-76230923506802/AnsiballZ_copy.py'
Jan 31 09:40:55 compute-0 sudo[80039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:55 compute-0 python3.9[80041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852454.3529468-119-76230923506802/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=021ce851d899c5f6e52cceece925290d4d5ff372 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:55 compute-0 sudo[80039]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:55 compute-0 sudo[80191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqkrazcqyysvblnekmnodspjqyzwvdbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852455.4829154-119-256702503132480/AnsiballZ_stat.py'
Jan 31 09:40:55 compute-0 sudo[80191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:55 compute-0 python3.9[80193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:55 compute-0 sudo[80191]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:56 compute-0 sudo[80314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqlagbxzarszeblxzddfdfmgqysqahgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852455.4829154-119-256702503132480/AnsiballZ_copy.py'
Jan 31 09:40:56 compute-0 sudo[80314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:56 compute-0 python3.9[80316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852455.4829154-119-256702503132480/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a19097ad1fe6e6914a7565cced156114482fc0dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:56 compute-0 sudo[80314]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:56 compute-0 sudo[80466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-draungjpqhqrkfoqtngugxchdqeldpgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852456.623768-163-165832166091452/AnsiballZ_file.py'
Jan 31 09:40:56 compute-0 sudo[80466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:57 compute-0 python3.9[80468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:57 compute-0 sudo[80466]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:57 compute-0 sudo[80618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eryfpidadwcqeviwzakdxobiscxsgzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852457.191848-163-272651757094890/AnsiballZ_file.py'
Jan 31 09:40:57 compute-0 sudo[80618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:57 compute-0 python3.9[80620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:40:57 compute-0 sudo[80618]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:58 compute-0 sudo[80770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzzqxrgcrlfibskqprwssbiuytgpgcbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852457.799676-178-91027745866685/AnsiballZ_stat.py'
Jan 31 09:40:58 compute-0 sudo[80770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:58 compute-0 python3.9[80772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:58 compute-0 sudo[80770]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:58 compute-0 sudo[80893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehoixgohwlagfbdknnuznbjmaujqguz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852457.799676-178-91027745866685/AnsiballZ_copy.py'
Jan 31 09:40:58 compute-0 sudo[80893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:58 compute-0 python3.9[80895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852457.799676-178-91027745866685/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fd650ce5fa63dc76ebb93eb19aac6f3715557369 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:58 compute-0 sudo[80893]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:59 compute-0 sudo[81045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxhuutnhktpppzghhimblzqlxlxoifwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852458.8312807-178-94727958060067/AnsiballZ_stat.py'
Jan 31 09:40:59 compute-0 sudo[81045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:59 compute-0 python3.9[81047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:40:59 compute-0 sudo[81045]: pam_unix(sudo:session): session closed for user root
Jan 31 09:40:59 compute-0 sudo[81168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekdmwpiweeutmfqlprubhlihrurhwtkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852458.8312807-178-94727958060067/AnsiballZ_copy.py'
Jan 31 09:40:59 compute-0 sudo[81168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:40:59 compute-0 python3.9[81170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852458.8312807-178-94727958060067/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ddefbeb02c9e56b2ea7ae8600f7f69c4ff1798ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:40:59 compute-0 sudo[81168]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:00 compute-0 sudo[81320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezhleypbpjxksllpbhhyabzviaaizkqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852459.8749452-178-253124835189955/AnsiballZ_stat.py'
Jan 31 09:41:00 compute-0 sudo[81320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:00 compute-0 python3.9[81322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:00 compute-0 sudo[81320]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:00 compute-0 sudo[81443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkleclcqcbkcbnddgfjvbpmmfhtbwzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852459.8749452-178-253124835189955/AnsiballZ_copy.py'
Jan 31 09:41:00 compute-0 sudo[81443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:00 compute-0 python3.9[81445]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852459.8749452-178-253124835189955/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cfc1462dee8a162af8b4d36d7cd796ac75882818 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:00 compute-0 sudo[81443]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:01 compute-0 sudo[81595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxhvabsjrfhgxisrkdaiyrnqhpclcwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852461.0274155-222-176621289128658/AnsiballZ_file.py'
Jan 31 09:41:01 compute-0 sudo[81595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:01 compute-0 python3.9[81597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:01 compute-0 sudo[81595]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:01 compute-0 sudo[81747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbqndtucwhemwhwktsvpxrmjikywestm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852461.6530602-222-83521518800759/AnsiballZ_file.py'
Jan 31 09:41:01 compute-0 sudo[81747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:02 compute-0 python3.9[81749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:02 compute-0 sudo[81747]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:02 compute-0 sudo[81899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdicmpikpqwejegptdlvaazhdhxlsjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852462.2710266-237-190613008517034/AnsiballZ_stat.py'
Jan 31 09:41:02 compute-0 sudo[81899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:02 compute-0 python3.9[81901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:02 compute-0 sudo[81899]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:02 compute-0 sudo[82022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxysdqxyiasywlmmutmiachiqomqgmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852462.2710266-237-190613008517034/AnsiballZ_copy.py'
Jan 31 09:41:02 compute-0 sudo[82022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:03 compute-0 python3.9[82024]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852462.2710266-237-190613008517034/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eea421cc08def125922eae026750682b89be04a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:03 compute-0 sudo[82022]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:03 compute-0 sudo[82174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahxfqbyjrbilcufhbbolczjellmpklf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852463.3069365-237-104182403325476/AnsiballZ_stat.py'
Jan 31 09:41:03 compute-0 sudo[82174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:03 compute-0 python3.9[82176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:03 compute-0 sudo[82174]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:03 compute-0 sudo[82297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljheffrpmxtfwyqmtyeudxteqbjvvefn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852463.3069365-237-104182403325476/AnsiballZ_copy.py'
Jan 31 09:41:03 compute-0 sudo[82297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:04 compute-0 python3.9[82299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852463.3069365-237-104182403325476/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=61856d85daf81e8622d91e176bb3163095befd60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:04 compute-0 sudo[82297]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:04 compute-0 sudo[82449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbkkoetfmqphjyafcacdhkqesvfdhbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852464.2997866-237-31777200572691/AnsiballZ_stat.py'
Jan 31 09:41:04 compute-0 sudo[82449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:04 compute-0 python3.9[82451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:04 compute-0 sudo[82449]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:05 compute-0 sudo[82572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkugrmvdbzxzejajqdidtldmaxlxumjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852464.2997866-237-31777200572691/AnsiballZ_copy.py'
Jan 31 09:41:05 compute-0 sudo[82572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:05 compute-0 python3.9[82574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852464.2997866-237-31777200572691/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a081056b927f21014d995951c2a59f3afc6828f9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:05 compute-0 sudo[82572]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:05 compute-0 sudo[82724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xygicrflgkavermmiorhpdkkdfrkxfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852465.441954-281-209325174966175/AnsiballZ_file.py'
Jan 31 09:41:05 compute-0 sudo[82724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:05 compute-0 python3.9[82726]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:05 compute-0 sudo[82724]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:06 compute-0 sudo[82876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrsrrlysinkycuzglibxtbihkoequhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852466.0551937-281-20443445124903/AnsiballZ_file.py'
Jan 31 09:41:06 compute-0 sudo[82876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:06 compute-0 python3.9[82878]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:06 compute-0 sudo[82876]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:06 compute-0 sudo[83028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahjpczbfipeecqpulasazmfkkahjfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852466.7221382-296-156495031563842/AnsiballZ_stat.py'
Jan 31 09:41:06 compute-0 sudo[83028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:07 compute-0 python3.9[83030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:07 compute-0 sudo[83028]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:07 compute-0 sudo[83151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tclxwabcjziwlfuqoqauxqtzpfzirmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852466.7221382-296-156495031563842/AnsiballZ_copy.py'
Jan 31 09:41:07 compute-0 sudo[83151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:07 compute-0 python3.9[83153]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852466.7221382-296-156495031563842/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=aceb076ff2e66fb54c20e49ca52c938a60b73ab5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:07 compute-0 sudo[83151]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:08 compute-0 sudo[83303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivvwccnarzthquqyccnozorlovxzhttd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852467.8132803-296-71345378739135/AnsiballZ_stat.py'
Jan 31 09:41:08 compute-0 sudo[83303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:08 compute-0 python3.9[83305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:08 compute-0 sudo[83303]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:08 compute-0 sudo[83426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ampwhhaqghtpqizoecjbfacjtyvaplqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852467.8132803-296-71345378739135/AnsiballZ_copy.py'
Jan 31 09:41:08 compute-0 sudo[83426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:08 compute-0 python3.9[83428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852467.8132803-296-71345378739135/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ddefbeb02c9e56b2ea7ae8600f7f69c4ff1798ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:08 compute-0 sudo[83426]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:09 compute-0 sudo[83578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zglqjaynehvvpadzhhngvneakmubnrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852468.8425386-296-61416878877112/AnsiballZ_stat.py'
Jan 31 09:41:09 compute-0 sudo[83578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:09 compute-0 python3.9[83580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:09 compute-0 sudo[83578]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:09 compute-0 sudo[83701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwcjrtjwordedyjgemgxhclnqmjszrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852468.8425386-296-61416878877112/AnsiballZ_copy.py'
Jan 31 09:41:09 compute-0 sudo[83701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:09 compute-0 python3.9[83703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852468.8425386-296-61416878877112/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d6bbae567e401bb91218b85daf638802b7bcb4b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:09 compute-0 sudo[83701]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:10 compute-0 sudo[83853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqbyqxvspuouiajqtinzwzbtsfnaqytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852470.4622536-356-172053402163984/AnsiballZ_file.py'
Jan 31 09:41:10 compute-0 sudo[83853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:10 compute-0 python3.9[83855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:10 compute-0 sudo[83853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:11 compute-0 sudo[84005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmqsxbwapoijowevkusiterpwwzjjzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852471.0778399-364-114871457449110/AnsiballZ_stat.py'
Jan 31 09:41:11 compute-0 sudo[84005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:11 compute-0 python3.9[84007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:11 compute-0 sudo[84005]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:12 compute-0 sudo[84128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxlungblrzabkuyyeomnpajotjpttzim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852471.0778399-364-114871457449110/AnsiballZ_copy.py'
Jan 31 09:41:12 compute-0 sudo[84128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:12 compute-0 python3.9[84130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852471.0778399-364-114871457449110/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:12 compute-0 sudo[84128]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:12 compute-0 sudo[84280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qadwmxyovmshydxrisbcyvdjmzswcfka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852472.554496-380-118277584818654/AnsiballZ_file.py'
Jan 31 09:41:12 compute-0 sudo[84280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:12 compute-0 python3.9[84282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:13 compute-0 sudo[84280]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:13 compute-0 sudo[84432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkijukldpbbjmechitfidnarydomyepo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852473.1431646-388-221544250780402/AnsiballZ_stat.py'
Jan 31 09:41:13 compute-0 sudo[84432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:13 compute-0 python3.9[84434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:13 compute-0 sudo[84432]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:13 compute-0 sudo[84555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtihpponlywupheaawgdmlovbszfzutm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852473.1431646-388-221544250780402/AnsiballZ_copy.py'
Jan 31 09:41:13 compute-0 sudo[84555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:14 compute-0 python3.9[84557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852473.1431646-388-221544250780402/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:14 compute-0 sudo[84555]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:14 compute-0 sudo[84707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koscohbvkovrwfdpswfbetzoggfloips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852474.280222-404-243340857511396/AnsiballZ_file.py'
Jan 31 09:41:14 compute-0 sudo[84707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:14 compute-0 python3.9[84709]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:14 compute-0 sudo[84707]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:15 compute-0 sudo[84859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqtdqibzqvknojavsmzrxryixsaxxycz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852474.8402407-412-233615223320978/AnsiballZ_stat.py'
Jan 31 09:41:15 compute-0 sudo[84859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:15 compute-0 python3.9[84861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:15 compute-0 sudo[84859]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:15 compute-0 sudo[84982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfzrqglgipjkiviagmoecjuewtlhnber ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852474.8402407-412-233615223320978/AnsiballZ_copy.py'
Jan 31 09:41:15 compute-0 sudo[84982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:15 compute-0 python3.9[84984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852474.8402407-412-233615223320978/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:15 compute-0 sudo[84982]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:16 compute-0 sudo[85134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmeenpbpfcpxbkngvltzwpugumavwbcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852475.9504585-428-197244902450732/AnsiballZ_file.py'
Jan 31 09:41:16 compute-0 sudo[85134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:16 compute-0 python3.9[85136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:16 compute-0 sudo[85134]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:16 compute-0 sudo[85286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvafnxtgiplxesjvdmreyhnfjdqjwipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852476.6185064-436-66664546341442/AnsiballZ_stat.py'
Jan 31 09:41:16 compute-0 sudo[85286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:17 compute-0 python3.9[85288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:17 compute-0 sudo[85286]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:17 compute-0 sudo[85409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbquhhlwdksurkwfxdfffkcslnmybefo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852476.6185064-436-66664546341442/AnsiballZ_copy.py'
Jan 31 09:41:17 compute-0 sudo[85409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:17 compute-0 python3.9[85411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852476.6185064-436-66664546341442/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:17 compute-0 sudo[85409]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:18 compute-0 sudo[85561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitoebvrvfnygemtkvsrjttozfmrzdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852477.873251-452-225039346115189/AnsiballZ_file.py'
Jan 31 09:41:18 compute-0 sudo[85561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:18 compute-0 python3.9[85563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:18 compute-0 sudo[85561]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:18 compute-0 sudo[85713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrvqzmwuimaslxjhbxvuyhzjhadgcmsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852478.4960644-460-270836647128629/AnsiballZ_stat.py'
Jan 31 09:41:18 compute-0 sudo[85713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:18 compute-0 python3.9[85715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:18 compute-0 sudo[85713]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:19 compute-0 sudo[85836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxeurjrymykxrwosccfpumgbhgqeixry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852478.4960644-460-270836647128629/AnsiballZ_copy.py'
Jan 31 09:41:19 compute-0 sudo[85836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:19 compute-0 python3.9[85838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852478.4960644-460-270836647128629/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:19 compute-0 sudo[85836]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:19 compute-0 sudo[85988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdcwhprbvlamrjqtdlwifwxrfzidscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852479.6924813-476-14903273541894/AnsiballZ_file.py'
Jan 31 09:41:19 compute-0 sudo[85988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:20 compute-0 python3.9[85990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:20 compute-0 sudo[85988]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:20 compute-0 chronyd[65618]: Selected source 216.232.132.102 (pool.ntp.org)
Jan 31 09:41:20 compute-0 sudo[86140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrakonebafhofmifxkqkzfgcmuptpye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852480.2659686-484-54695161259893/AnsiballZ_stat.py'
Jan 31 09:41:20 compute-0 sudo[86140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:20 compute-0 python3.9[86142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:20 compute-0 sudo[86140]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:21 compute-0 sudo[86263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cayhypxehbskkqblhlvtfqmijwtsucmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852480.2659686-484-54695161259893/AnsiballZ_copy.py'
Jan 31 09:41:21 compute-0 sudo[86263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:21 compute-0 python3.9[86265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852480.2659686-484-54695161259893/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:21 compute-0 sudo[86263]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:21 compute-0 sudo[86415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viaussegkeogosaglnyzjbjfnjomvgtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852481.4553485-500-275814706398317/AnsiballZ_file.py'
Jan 31 09:41:21 compute-0 sudo[86415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:21 compute-0 python3.9[86417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:21 compute-0 sudo[86415]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:22 compute-0 sudo[86567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domjgktqaxrsihbsnwopxcvqyilmknsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852482.0445545-508-281193724162641/AnsiballZ_stat.py'
Jan 31 09:41:22 compute-0 sudo[86567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:22 compute-0 python3.9[86569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:22 compute-0 sudo[86567]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:22 compute-0 sudo[86690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwdfboujdbmrkrzycfjefgfbsvyuwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852482.0445545-508-281193724162641/AnsiballZ_copy.py'
Jan 31 09:41:22 compute-0 sudo[86690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:22 compute-0 python3.9[86692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852482.0445545-508-281193724162641/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:22 compute-0 sudo[86690]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:23 compute-0 sudo[86842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysiwlmxzmxpnrazboapiomxwobayrbqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852483.1315436-524-278842930088068/AnsiballZ_file.py'
Jan 31 09:41:23 compute-0 sudo[86842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:23 compute-0 python3.9[86844]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:23 compute-0 sudo[86842]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:23 compute-0 sudo[86994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwjiohxbqfzgkpubesxkmrorewsvngmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852483.729118-532-133850137594778/AnsiballZ_stat.py'
Jan 31 09:41:23 compute-0 sudo[86994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:24 compute-0 python3.9[86996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:24 compute-0 sudo[86994]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:24 compute-0 sudo[87117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ectrbueofuhewfkeygmabrvaotnuxxdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852483.729118-532-133850137594778/AnsiballZ_copy.py'
Jan 31 09:41:24 compute-0 sudo[87117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:24 compute-0 python3.9[87119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852483.729118-532-133850137594778/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=2e43c1592b26a27594aa50228fef3bc5ccb02015 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:24 compute-0 sudo[87117]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:25 compute-0 sshd-session[77904]: Connection closed by 192.168.122.30 port 47300
Jan 31 09:41:25 compute-0 sshd-session[77901]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:41:25 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 09:41:25 compute-0 systemd[1]: session-18.scope: Consumed 29.851s CPU time.
Jan 31 09:41:25 compute-0 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Jan 31 09:41:25 compute-0 systemd-logind[795]: Removed session 18.
Jan 31 09:41:31 compute-0 sshd-session[87144]: Accepted publickey for zuul from 192.168.122.30 port 39736 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:41:31 compute-0 systemd-logind[795]: New session 19 of user zuul.
Jan 31 09:41:31 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 31 09:41:31 compute-0 sshd-session[87144]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:41:32 compute-0 python3.9[87297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:41:33 compute-0 sudo[87451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aghdgolidaqbzwtzputdmjhdchjxuaeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852492.5835285-29-86104793068843/AnsiballZ_file.py'
Jan 31 09:41:33 compute-0 sudo[87451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:33 compute-0 python3.9[87453]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:33 compute-0 sudo[87451]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:33 compute-0 sudo[87603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzznadyyutdhvqnvuqeihjbuqdmousqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852493.3245304-29-185447151068323/AnsiballZ_file.py'
Jan 31 09:41:33 compute-0 sudo[87603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:33 compute-0 python3.9[87605]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:41:33 compute-0 sudo[87603]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:34 compute-0 python3.9[87755]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:41:35 compute-0 sudo[87905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbnjaeetebvhnquwevsridbrlnwmkggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852494.604102-52-139681130978588/AnsiballZ_seboolean.py'
Jan 31 09:41:35 compute-0 sudo[87905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:35 compute-0 python3.9[87907]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 09:41:36 compute-0 sudo[87905]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:36 compute-0 sudo[88061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqsvpttdrlosfhyehhgjrjrujxwyvjav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852496.4959803-62-240703941480037/AnsiballZ_setup.py'
Jan 31 09:41:36 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 09:41:36 compute-0 sudo[88061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:37 compute-0 python3.9[88063]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:41:37 compute-0 sudo[88061]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:37 compute-0 sudo[88145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyxaovujjdlukoxultwrspcmyngtttkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852496.4959803-62-240703941480037/AnsiballZ_dnf.py'
Jan 31 09:41:37 compute-0 sudo[88145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:37 compute-0 python3.9[88147]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:41:39 compute-0 sudo[88145]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:39 compute-0 sudo[88298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiekrrkpwvcfmmwltznjnkguntsehefc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852499.1641529-74-161665177046417/AnsiballZ_systemd.py'
Jan 31 09:41:39 compute-0 sudo[88298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:40 compute-0 python3.9[88300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:41:40 compute-0 sudo[88298]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:40 compute-0 sudo[88453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlbibhkoqavjdzoqhhvypeaovghusfik ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852500.2591143-82-214680868378373/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 09:41:40 compute-0 sudo[88453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:40 compute-0 python3[88455]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 09:41:40 compute-0 sudo[88453]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:41 compute-0 sudo[88605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklgrqxpvntpoqgnepexwctulstomyds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852501.155524-91-146534447803752/AnsiballZ_file.py'
Jan 31 09:41:41 compute-0 sudo[88605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:41 compute-0 python3.9[88607]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:41 compute-0 sudo[88605]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:42 compute-0 sudo[88757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbxtbfkfeethphaxubcribfkfjjyomi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852501.6946807-99-49258210700267/AnsiballZ_stat.py'
Jan 31 09:41:42 compute-0 sudo[88757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:42 compute-0 python3.9[88759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:42 compute-0 sudo[88757]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:42 compute-0 sudo[88835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeurlzlqfanvceshayprmkxesakhmbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852501.6946807-99-49258210700267/AnsiballZ_file.py'
Jan 31 09:41:42 compute-0 sudo[88835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:42 compute-0 python3.9[88837]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:42 compute-0 sudo[88835]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:43 compute-0 sudo[88987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwyjquqwtxtlbgoqzmfsbzroxuzvbcuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852502.880176-111-124162553159977/AnsiballZ_stat.py'
Jan 31 09:41:43 compute-0 sudo[88987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:43 compute-0 python3.9[88989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:43 compute-0 sudo[88987]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:43 compute-0 sudo[89065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moukmsvzdcwemdxuvenhsoyihggnzate ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852502.880176-111-124162553159977/AnsiballZ_file.py'
Jan 31 09:41:43 compute-0 sudo[89065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:43 compute-0 python3.9[89067]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9u6ij9xf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:43 compute-0 sudo[89065]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:44 compute-0 sudo[89217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrirqqpjqsmuwfymqxfzpqunqgledsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852503.8547983-123-70475752779291/AnsiballZ_stat.py'
Jan 31 09:41:44 compute-0 sudo[89217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:44 compute-0 python3.9[89219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:44 compute-0 sudo[89217]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:44 compute-0 sudo[89295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwkssejbftptptlnfwnhelxfdmiziqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852503.8547983-123-70475752779291/AnsiballZ_file.py'
Jan 31 09:41:44 compute-0 sudo[89295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:44 compute-0 python3.9[89297]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:44 compute-0 sudo[89295]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:45 compute-0 sudo[89447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syxkpeavlkkgucrvtdmkcggpjutlewci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852504.9159338-136-183472361709389/AnsiballZ_command.py'
Jan 31 09:41:45 compute-0 sudo[89447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:45 compute-0 python3.9[89449]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:45 compute-0 sudo[89447]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:46 compute-0 sudo[89600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqvmydmijntiyzsmrkyxmuegiyqqrtvj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852505.7853265-144-118029925909009/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 09:41:46 compute-0 sudo[89600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:46 compute-0 python3[89602]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 09:41:46 compute-0 sudo[89600]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:46 compute-0 sudo[89752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asieufyopravfhmufylymvfnevwxbrtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852506.5218487-152-155278435193253/AnsiballZ_stat.py'
Jan 31 09:41:46 compute-0 sudo[89752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:46 compute-0 python3.9[89754]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:46 compute-0 sudo[89752]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:47 compute-0 sudo[89877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzmcjoeqxagosymtxcaflaugrmmytyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852506.5218487-152-155278435193253/AnsiballZ_copy.py'
Jan 31 09:41:47 compute-0 sudo[89877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:47 compute-0 python3.9[89879]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852506.5218487-152-155278435193253/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:47 compute-0 sudo[89877]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:48 compute-0 sudo[90029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvrxatvtptwwsvlrnlzwnwnajcdfioys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852507.7571664-167-195701512326641/AnsiballZ_stat.py'
Jan 31 09:41:48 compute-0 sudo[90029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:48 compute-0 python3.9[90031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:48 compute-0 sudo[90029]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:48 compute-0 sudo[90154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mapiqycgyybrogwhuwqxkjkgvyjdavuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852507.7571664-167-195701512326641/AnsiballZ_copy.py'
Jan 31 09:41:48 compute-0 sudo[90154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:48 compute-0 python3.9[90156]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852507.7571664-167-195701512326641/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:48 compute-0 sudo[90154]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:49 compute-0 sudo[90306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdcrrakqgmsjpnvwhoavrrziklurvgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852508.9249403-182-52429177218274/AnsiballZ_stat.py'
Jan 31 09:41:49 compute-0 sudo[90306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:49 compute-0 python3.9[90308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:49 compute-0 sudo[90306]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:49 compute-0 sudo[90431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiiblhuukqbnmspolikvudyodryujcmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852508.9249403-182-52429177218274/AnsiballZ_copy.py'
Jan 31 09:41:49 compute-0 sudo[90431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:49 compute-0 python3.9[90433]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852508.9249403-182-52429177218274/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:49 compute-0 sudo[90431]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:50 compute-0 sudo[90583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbheywiovkwiprifscrvhvvplkhojozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852510.0963044-197-127519973782827/AnsiballZ_stat.py'
Jan 31 09:41:50 compute-0 sudo[90583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:50 compute-0 python3.9[90585]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:50 compute-0 sudo[90583]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:50 compute-0 sudo[90708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffkwqjjrjphwkdehradholxihzztojlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852510.0963044-197-127519973782827/AnsiballZ_copy.py'
Jan 31 09:41:50 compute-0 sudo[90708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:51 compute-0 python3.9[90710]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852510.0963044-197-127519973782827/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:51 compute-0 sudo[90708]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:51 compute-0 sudo[90860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqzdqcfnretzpnaqgceduntohvmgmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852511.243964-212-214096691571910/AnsiballZ_stat.py'
Jan 31 09:41:51 compute-0 sudo[90860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:51 compute-0 python3.9[90862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:41:51 compute-0 sudo[90860]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:52 compute-0 sudo[90985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygebxnvewjdeycttiqovsibflguxgld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852511.243964-212-214096691571910/AnsiballZ_copy.py'
Jan 31 09:41:52 compute-0 sudo[90985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:52 compute-0 python3.9[90987]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852511.243964-212-214096691571910/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:52 compute-0 sudo[90985]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:52 compute-0 sudo[91137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jypdmzbasaolwlcxmbwvxlgtjdowlqsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852512.3440878-227-244586457725801/AnsiballZ_file.py'
Jan 31 09:41:52 compute-0 sudo[91137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:52 compute-0 python3.9[91139]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:52 compute-0 sudo[91137]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:53 compute-0 sudo[91289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrmpbsyjczhwoktdjbqsylhpzvdhjbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852512.885369-235-109573397124507/AnsiballZ_command.py'
Jan 31 09:41:53 compute-0 sudo[91289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:53 compute-0 python3.9[91291]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:53 compute-0 sudo[91289]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:53 compute-0 sudo[91444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikbqqxcdppiywogiperjqlehfqyfhbjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852513.4611113-243-71168735986417/AnsiballZ_blockinfile.py'
Jan 31 09:41:53 compute-0 sudo[91444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:53 compute-0 python3.9[91446]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:54 compute-0 sudo[91444]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:54 compute-0 sudo[91596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smwviwizxgakwbmbgpnkbgffkjuuzcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852514.4211338-252-22444579551218/AnsiballZ_command.py'
Jan 31 09:41:54 compute-0 sudo[91596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:54 compute-0 python3.9[91598]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:54 compute-0 sudo[91596]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:55 compute-0 sudo[91749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkmymohfhtfmkbtsiwcmzpkxpnjdwajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852514.960688-260-60029421224183/AnsiballZ_stat.py'
Jan 31 09:41:55 compute-0 sudo[91749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:55 compute-0 python3.9[91751]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:41:55 compute-0 sudo[91749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:55 compute-0 sudo[91903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwxqsovtmkghedvnspmcwjbqusdnjeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852515.5791655-268-64259477982733/AnsiballZ_command.py'
Jan 31 09:41:55 compute-0 sudo[91903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:56 compute-0 python3.9[91905]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:56 compute-0 sudo[91903]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:56 compute-0 sudo[92058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsobiqidkazzcenbycnzhevrxchuqwpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852516.1865914-276-208636335689768/AnsiballZ_file.py'
Jan 31 09:41:56 compute-0 sudo[92058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:56 compute-0 python3.9[92060]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:41:56 compute-0 sudo[92058]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:57 compute-0 python3.9[92210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:41:58 compute-0 sudo[92361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygxmmlrciuztcammptsssvmckkrmsghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852518.5098078-316-36133389754244/AnsiballZ_command.py'
Jan 31 09:41:58 compute-0 sudo[92361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:58 compute-0 python3.9[92363]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:58 compute-0 ovs-vsctl[92364]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 09:41:58 compute-0 sudo[92361]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:59 compute-0 sudo[92514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekmuvgllwiecpihkejhqyrtbveizfveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852519.1282883-325-62133725812755/AnsiballZ_command.py'
Jan 31 09:41:59 compute-0 sudo[92514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:41:59 compute-0 python3.9[92516]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:41:59 compute-0 sudo[92514]: pam_unix(sudo:session): session closed for user root
Jan 31 09:41:59 compute-0 sudo[92669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edzmatihldrbcrmapavhdkutbmgylpzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852519.7094326-333-176212678271481/AnsiballZ_command.py'
Jan 31 09:41:59 compute-0 sudo[92669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:00 compute-0 python3.9[92671]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:42:00 compute-0 ovs-vsctl[92672]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 09:42:00 compute-0 sudo[92669]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:00 compute-0 python3.9[92822]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:42:01 compute-0 sudo[92974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qddfmwtakraqmekjrbqaamrlkokdiekt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852520.9985871-350-278144397773440/AnsiballZ_file.py'
Jan 31 09:42:01 compute-0 sudo[92974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:01 compute-0 python3.9[92976]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:01 compute-0 sudo[92974]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:01 compute-0 sudo[93126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrvbvvixzimgyanzubnbrqzcgjfnwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852521.7366705-358-64147832090935/AnsiballZ_stat.py'
Jan 31 09:42:01 compute-0 sudo[93126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:02 compute-0 python3.9[93128]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:02 compute-0 sudo[93126]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:02 compute-0 sudo[93204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtehophldjurnbdptthlcdyhtcopffqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852521.7366705-358-64147832090935/AnsiballZ_file.py'
Jan 31 09:42:02 compute-0 sudo[93204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:02 compute-0 python3.9[93206]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:02 compute-0 sudo[93204]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:03 compute-0 sudo[93356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwelpvygtowwndygbqvstavojlwhbmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852522.814557-358-50381750291363/AnsiballZ_stat.py'
Jan 31 09:42:03 compute-0 sudo[93356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:03 compute-0 python3.9[93358]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:03 compute-0 sudo[93356]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:03 compute-0 sudo[93434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzuxgkpxtilpbuzutbvazebxsvkpslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852522.814557-358-50381750291363/AnsiballZ_file.py'
Jan 31 09:42:03 compute-0 sudo[93434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:03 compute-0 python3.9[93436]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:03 compute-0 sudo[93434]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:04 compute-0 sudo[93586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygvfblvemweozoluvzdbhjqcposmqlzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852523.7767131-381-179066738213990/AnsiballZ_file.py'
Jan 31 09:42:04 compute-0 sudo[93586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:04 compute-0 python3.9[93588]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:04 compute-0 sudo[93586]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:04 compute-0 sudo[93738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzevbqoxdtvuirxtvdvhmxmsrkntvxpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852524.3759131-389-95621693352847/AnsiballZ_stat.py'
Jan 31 09:42:04 compute-0 sudo[93738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:04 compute-0 python3.9[93740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:04 compute-0 sudo[93738]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:05 compute-0 sudo[93816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflfahjcxcejfajwwzhvmxomuqrvvpwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852524.3759131-389-95621693352847/AnsiballZ_file.py'
Jan 31 09:42:05 compute-0 sudo[93816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:05 compute-0 python3.9[93818]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:05 compute-0 sudo[93816]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:05 compute-0 sudo[93968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtjoherqeswlvwgqavpwsdzeikzorwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852525.4479508-401-72812524132927/AnsiballZ_stat.py'
Jan 31 09:42:05 compute-0 sudo[93968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:05 compute-0 python3.9[93970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:05 compute-0 sudo[93968]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:06 compute-0 sudo[94046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwgoaibftwiuacnnpdevgonhmmorxsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852525.4479508-401-72812524132927/AnsiballZ_file.py'
Jan 31 09:42:06 compute-0 sudo[94046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:06 compute-0 python3.9[94048]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:06 compute-0 sudo[94046]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:06 compute-0 sudo[94198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmrcdnkjxjdoxddubrtsipyeigwoelw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852526.438086-413-148530266872987/AnsiballZ_systemd.py'
Jan 31 09:42:06 compute-0 sudo[94198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:07 compute-0 python3.9[94200]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:42:07 compute-0 systemd[1]: Reloading.
Jan 31 09:42:07 compute-0 systemd-rc-local-generator[94228]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:07 compute-0 systemd-sysv-generator[94232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:07 compute-0 sudo[94198]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:07 compute-0 sudo[94388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdoyqtymshaykqxiacqnbcnfiexpaqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852527.4252248-421-8493843544780/AnsiballZ_stat.py'
Jan 31 09:42:07 compute-0 sudo[94388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:07 compute-0 python3.9[94390]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:07 compute-0 sudo[94388]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:08 compute-0 sudo[94466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cauoepvuzznolgngwnckcecjkvqbloqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852527.4252248-421-8493843544780/AnsiballZ_file.py'
Jan 31 09:42:08 compute-0 sudo[94466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:08 compute-0 python3.9[94468]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:08 compute-0 sudo[94466]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:08 compute-0 sudo[94618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvsidziowtffguvzylmtbjymatkrwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852528.5271034-433-273975746207971/AnsiballZ_stat.py'
Jan 31 09:42:08 compute-0 sudo[94618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:08 compute-0 python3.9[94620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:09 compute-0 sudo[94618]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:09 compute-0 sudo[94697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujxdqywqplljdvarhgplrjxynnfrxcmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852528.5271034-433-273975746207971/AnsiballZ_file.py'
Jan 31 09:42:09 compute-0 sudo[94697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:09 compute-0 python3.9[94699]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:09 compute-0 sudo[94697]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:09 compute-0 sudo[94849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaehklyqqkratszvbxjjltgbistksfoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852529.5192108-445-211407509835277/AnsiballZ_systemd.py'
Jan 31 09:42:09 compute-0 sudo[94849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:10 compute-0 python3.9[94851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:42:10 compute-0 systemd[1]: Reloading.
Jan 31 09:42:10 compute-0 systemd-sysv-generator[94883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:10 compute-0 systemd-rc-local-generator[94877]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:10 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 09:42:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 09:42:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 09:42:10 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 09:42:10 compute-0 sudo[94849]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:10 compute-0 sudo[95044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmelspshrnygamlmtpovvvvhlkvzzckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852530.5589564-455-144457905778609/AnsiballZ_file.py'
Jan 31 09:42:10 compute-0 sudo[95044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:11 compute-0 python3.9[95046]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:11 compute-0 sudo[95044]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:11 compute-0 sudo[95196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejzyjftsmwzilkcorpolnmkrxxlvuqbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852531.394693-463-218455976220867/AnsiballZ_stat.py'
Jan 31 09:42:11 compute-0 sudo[95196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:11 compute-0 python3.9[95198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:11 compute-0 sudo[95196]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:12 compute-0 sudo[95319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnukbhmfdeyosqxqsqgbkfkizvcmbxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852531.394693-463-218455976220867/AnsiballZ_copy.py'
Jan 31 09:42:12 compute-0 sudo[95319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:12 compute-0 python3.9[95321]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852531.394693-463-218455976220867/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:12 compute-0 sudo[95319]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:12 compute-0 sudo[95471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfblgsmbfemulctktcmwnauqdlyxsjqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852532.6208403-480-257726573560682/AnsiballZ_file.py'
Jan 31 09:42:12 compute-0 sudo[95471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:12 compute-0 python3.9[95473]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:13 compute-0 sudo[95471]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:13 compute-0 sudo[95623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxygoyhyxgzfwvstodlvhyvlzrjkrsuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852533.1548355-488-14509611136517/AnsiballZ_file.py'
Jan 31 09:42:13 compute-0 sudo[95623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:13 compute-0 python3.9[95625]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:13 compute-0 sudo[95623]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:13 compute-0 sudo[95775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cssaxgwuatlfkgykjkwfpeuvrylovaed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852533.7759268-496-101831960367711/AnsiballZ_stat.py'
Jan 31 09:42:13 compute-0 sudo[95775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:14 compute-0 python3.9[95777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:14 compute-0 sudo[95775]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:14 compute-0 sudo[95898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypfcouexidbshhbjbxbzzoseqrptrynw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852533.7759268-496-101831960367711/AnsiballZ_copy.py'
Jan 31 09:42:14 compute-0 sudo[95898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:14 compute-0 python3.9[95900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852533.7759268-496-101831960367711/.source.json _original_basename=.e5hpojfr follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:14 compute-0 sudo[95898]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:15 compute-0 python3.9[96050]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:17 compute-0 sudo[96471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkcncpflwtegztogzoxugjnsogawcqrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852536.643949-536-191834564760721/AnsiballZ_container_config_data.py'
Jan 31 09:42:17 compute-0 sudo[96471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:17 compute-0 python3.9[96473]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 09:42:17 compute-0 sudo[96471]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:17 compute-0 sudo[96623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpunsjkskhjkievstdzuvmibbaqvnav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852537.5845828-547-163440507215928/AnsiballZ_container_config_hash.py'
Jan 31 09:42:17 compute-0 sudo[96623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:18 compute-0 python3.9[96625]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:42:18 compute-0 sudo[96623]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:18 compute-0 sudo[96775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxfevsuntmiixykpwallvwxucdlrpqee ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852538.4137623-557-78254004803126/AnsiballZ_edpm_container_manage.py'
Jan 31 09:42:18 compute-0 sudo[96775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:19 compute-0 python3[96777]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:42:19 compute-0 podman[96814]: 2026-01-31 09:42:19.299321398 +0000 UTC m=+0.052345612 container create 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 09:42:19 compute-0 podman[96814]: 2026-01-31 09:42:19.274821975 +0000 UTC m=+0.027846199 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 09:42:19 compute-0 python3[96777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 09:42:19 compute-0 sudo[96775]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:19 compute-0 sudo[97002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdwlkmgqwhrziauqkocdwqlxugppbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852539.5809872-565-269587608426104/AnsiballZ_stat.py'
Jan 31 09:42:19 compute-0 sudo[97002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:20 compute-0 python3.9[97004]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:42:20 compute-0 sudo[97002]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 09:42:20 compute-0 sudo[97156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrqymqxwxsguciuopbnosvpblauemnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852540.2822762-574-145222425077554/AnsiballZ_file.py'
Jan 31 09:42:20 compute-0 sudo[97156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:20 compute-0 python3.9[97158]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:20 compute-0 sudo[97156]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:20 compute-0 sudo[97232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbraazvkklxjgosbwxthdkbgmgtaujsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852540.2822762-574-145222425077554/AnsiballZ_stat.py'
Jan 31 09:42:20 compute-0 sudo[97232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:21 compute-0 python3.9[97234]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:42:21 compute-0 sudo[97232]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:21 compute-0 sudo[97383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qprpbmwbcffslxhtcxsnrikluuscgtfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852541.1102896-574-256734529253739/AnsiballZ_copy.py'
Jan 31 09:42:21 compute-0 sudo[97383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:21 compute-0 python3.9[97385]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769852541.1102896-574-256734529253739/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:21 compute-0 sudo[97383]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:22 compute-0 sudo[97459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugqdcgpwfjjommgonyxmchtympenfww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852541.1102896-574-256734529253739/AnsiballZ_systemd.py'
Jan 31 09:42:22 compute-0 sudo[97459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:22 compute-0 python3.9[97461]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:42:22 compute-0 systemd[1]: Reloading.
Jan 31 09:42:22 compute-0 systemd-rc-local-generator[97487]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:22 compute-0 systemd-sysv-generator[97492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:22 compute-0 sudo[97459]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:22 compute-0 sudo[97569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xotrkyqfmqzvytipygqnpekapcgdoene ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852541.1102896-574-256734529253739/AnsiballZ_systemd.py'
Jan 31 09:42:22 compute-0 sudo[97569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:23 compute-0 python3.9[97571]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:42:23 compute-0 systemd[1]: Reloading.
Jan 31 09:42:23 compute-0 systemd-rc-local-generator[97596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:23 compute-0 systemd-sysv-generator[97605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:23 compute-0 systemd[1]: Starting ovn_controller container...
Jan 31 09:42:23 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 09:42:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1390b7afc07780df8092632671320e7b95ba31fdbf20e50c4942712a8645595/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 09:42:23 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.
Jan 31 09:42:23 compute-0 podman[97612]: 2026-01-31 09:42:23.397620661 +0000 UTC m=+0.128310276 container init 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + sudo -E kolla_set_configs
Jan 31 09:42:23 compute-0 podman[97612]: 2026-01-31 09:42:23.422601387 +0000 UTC m=+0.153291022 container start 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 09:42:23 compute-0 edpm-start-podman-container[97612]: ovn_controller
Jan 31 09:42:23 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 31 09:42:23 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 09:42:23 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 09:42:23 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 31 09:42:23 compute-0 edpm-start-podman-container[97611]: Creating additional drop-in dependency for "ovn_controller" (57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab)
Jan 31 09:42:23 compute-0 systemd[97669]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 09:42:23 compute-0 podman[97634]: 2026-01-31 09:42:23.498966231 +0000 UTC m=+0.065308158 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:42:23 compute-0 systemd[1]: 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab-5f7a9c04371f8ede.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:42:23 compute-0 systemd[1]: 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab-5f7a9c04371f8ede.service: Failed with result 'exit-code'.
Jan 31 09:42:23 compute-0 systemd[1]: Reloading.
Jan 31 09:42:23 compute-0 systemd[97669]: Queued start job for default target Main User Target.
Jan 31 09:42:23 compute-0 systemd[97669]: Created slice User Application Slice.
Jan 31 09:42:23 compute-0 systemd[97669]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 09:42:23 compute-0 systemd[97669]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 09:42:23 compute-0 systemd[97669]: Reached target Paths.
Jan 31 09:42:23 compute-0 systemd[97669]: Reached target Timers.
Jan 31 09:42:23 compute-0 systemd[97669]: Starting D-Bus User Message Bus Socket...
Jan 31 09:42:23 compute-0 systemd[97669]: Starting Create User's Volatile Files and Directories...
Jan 31 09:42:23 compute-0 systemd[97669]: Listening on D-Bus User Message Bus Socket.
Jan 31 09:42:23 compute-0 systemd[97669]: Reached target Sockets.
Jan 31 09:42:23 compute-0 systemd[97669]: Finished Create User's Volatile Files and Directories.
Jan 31 09:42:23 compute-0 systemd[97669]: Reached target Basic System.
Jan 31 09:42:23 compute-0 systemd[97669]: Reached target Main User Target.
Jan 31 09:42:23 compute-0 systemd[97669]: Startup finished in 108ms.
Jan 31 09:42:23 compute-0 systemd-sysv-generator[97722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:23 compute-0 systemd-rc-local-generator[97718]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:23 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 31 09:42:23 compute-0 systemd[1]: Started ovn_controller container.
Jan 31 09:42:23 compute-0 systemd[1]: Started Session c1 of User root.
Jan 31 09:42:23 compute-0 sudo[97569]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:23 compute-0 ovn_controller[97627]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:42:23 compute-0 ovn_controller[97627]: INFO:__main__:Validating config file
Jan 31 09:42:23 compute-0 ovn_controller[97627]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:42:23 compute-0 ovn_controller[97627]: INFO:__main__:Writing out command to execute
Jan 31 09:42:23 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: ++ cat /run_command
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + ARGS=
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + sudo kolla_copy_cacerts
Jan 31 09:42:23 compute-0 systemd[1]: Started Session c2 of User root.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + [[ ! -n '' ]]
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + . kolla_extend_start
Jan 31 09:42:23 compute-0 ovn_controller[97627]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + umask 0022
Jan 31 09:42:23 compute-0 ovn_controller[97627]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 09:42:23 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.8970] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.8978] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <warn>  [1769852543.8981] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.8989] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.8996] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.8999] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 09:42:23 compute-0 kernel: br-int: entered promiscuous mode
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00019|main|INFO|OVS feature set changed, force recompute.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.9170] manager: (ovn-b6a3de-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 09:42:23 compute-0 ovn_controller[97627]: 2026-01-31T09:42:23Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 09:42:23 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.9315] device (genev_sys_6081): carrier: link connected
Jan 31 09:42:23 compute-0 NetworkManager[56281]: <info>  [1769852543.9320] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 09:42:23 compute-0 systemd-udevd[97764]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:42:23 compute-0 systemd-udevd[97768]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:42:24 compute-0 python3.9[97896]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:42:25 compute-0 sudo[98046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmoggsivqleazdkbgczdbejeofzhrrzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852545.0101008-619-220816205441192/AnsiballZ_stat.py'
Jan 31 09:42:25 compute-0 sudo[98046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:25 compute-0 python3.9[98048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:25 compute-0 sudo[98046]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:25 compute-0 sudo[98169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnsmdgsaaqmufmcbitfcyxfnqbvrgcrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852545.0101008-619-220816205441192/AnsiballZ_copy.py'
Jan 31 09:42:25 compute-0 sudo[98169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:25 compute-0 python3.9[98171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852545.0101008-619-220816205441192/.source.yaml _original_basename=.cbvyh5lk follow=False checksum=2d1ff38d28edf4954791706ff4da3c88f5dc74b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:25 compute-0 sudo[98169]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:26 compute-0 sudo[98321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdvqleysnpblzueeoqtwlxhurduwemx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852546.0741632-634-56001861691376/AnsiballZ_command.py'
Jan 31 09:42:26 compute-0 sudo[98321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:26 compute-0 python3.9[98323]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:42:26 compute-0 ovs-vsctl[98324]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 09:42:26 compute-0 sudo[98321]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:26 compute-0 sudo[98474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnegqbbpuvesxfezvubjmwpiikjgnmvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852546.6539192-642-21066131233857/AnsiballZ_command.py'
Jan 31 09:42:26 compute-0 sudo[98474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:27 compute-0 python3.9[98476]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:42:27 compute-0 ovs-vsctl[98478]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 09:42:27 compute-0 sudo[98474]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:27 compute-0 irqbalance[792]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 09:42:27 compute-0 irqbalance[792]: IRQ 26 affinity is now unmanaged
Jan 31 09:42:27 compute-0 sudo[98629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwasqupejtngdnjsjdqwjqlvozqhmozm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852547.4529321-656-266955058457072/AnsiballZ_command.py'
Jan 31 09:42:27 compute-0 sudo[98629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:27 compute-0 python3.9[98631]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:42:27 compute-0 ovs-vsctl[98632]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 09:42:27 compute-0 sudo[98629]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:28 compute-0 sshd-session[87147]: Connection closed by 192.168.122.30 port 39736
Jan 31 09:42:28 compute-0 sshd-session[87144]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:42:28 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 09:42:28 compute-0 systemd[1]: session-19.scope: Consumed 40.027s CPU time.
Jan 31 09:42:28 compute-0 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Jan 31 09:42:28 compute-0 systemd-logind[795]: Removed session 19.
Jan 31 09:42:33 compute-0 sshd-session[98658]: Accepted publickey for zuul from 192.168.122.30 port 46692 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:42:33 compute-0 systemd-logind[795]: New session 21 of user zuul.
Jan 31 09:42:33 compute-0 systemd[1]: Started Session 21 of User zuul.
Jan 31 09:42:33 compute-0 sshd-session[98658]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:42:34 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 31 09:42:34 compute-0 systemd[97669]: Activating special unit Exit the Session...
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped target Main User Target.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped target Basic System.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped target Paths.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped target Sockets.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped target Timers.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 09:42:34 compute-0 systemd[97669]: Closed D-Bus User Message Bus Socket.
Jan 31 09:42:34 compute-0 systemd[97669]: Stopped Create User's Volatile Files and Directories.
Jan 31 09:42:34 compute-0 systemd[97669]: Removed slice User Application Slice.
Jan 31 09:42:34 compute-0 systemd[97669]: Reached target Shutdown.
Jan 31 09:42:34 compute-0 systemd[97669]: Finished Exit the Session.
Jan 31 09:42:34 compute-0 systemd[97669]: Reached target Exit the Session.
Jan 31 09:42:34 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 09:42:34 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 31 09:42:34 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 09:42:34 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 09:42:34 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 09:42:34 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 09:42:34 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 09:42:34 compute-0 python3.9[98813]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:42:35 compute-0 sudo[98967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yitcfjemewxiqithnvhkfqimmcvbzlzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852555.0563521-29-219946051413248/AnsiballZ_file.py'
Jan 31 09:42:35 compute-0 sudo[98967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:35 compute-0 python3.9[98969]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:35 compute-0 sudo[98967]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:36 compute-0 sudo[99119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmdogjwrklafuwqkqgimljuxlzkkbkoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852555.8168993-29-277870941964265/AnsiballZ_file.py'
Jan 31 09:42:36 compute-0 sudo[99119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:36 compute-0 python3.9[99121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:36 compute-0 sudo[99119]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:36 compute-0 sudo[99271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kceiiaqffmfuipdpdumxmtovbovrshsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852556.4347413-29-153385538609318/AnsiballZ_file.py'
Jan 31 09:42:36 compute-0 sudo[99271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:36 compute-0 python3.9[99273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:36 compute-0 sudo[99271]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:37 compute-0 sudo[99423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubnfsvepdcvswbteshukmrppyeesojs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852556.9593892-29-222646758826095/AnsiballZ_file.py'
Jan 31 09:42:37 compute-0 sudo[99423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:37 compute-0 python3.9[99425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:37 compute-0 sudo[99423]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:37 compute-0 sudo[99575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykfhxqoihmmccpkkjqmudawbdjtraqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852557.4578424-29-230246832215374/AnsiballZ_file.py'
Jan 31 09:42:37 compute-0 sudo[99575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:37 compute-0 python3.9[99577]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:37 compute-0 sudo[99575]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:38 compute-0 python3.9[99727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:42:39 compute-0 sudo[99877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhknljoxhvbleaxkuwyxwjnxbzhfyjsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852558.653243-73-28810785984870/AnsiballZ_seboolean.py'
Jan 31 09:42:39 compute-0 sudo[99877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:39 compute-0 python3.9[99879]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 09:42:39 compute-0 sudo[99877]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:40 compute-0 python3.9[100029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:41 compute-0 python3.9[100150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852559.906793-81-226000839423200/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:41 compute-0 python3.9[100300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:42 compute-0 python3.9[100422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852561.35848-96-87664827026955/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:42 compute-0 sudo[100572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnykrfhkueggpdozbqirjrgxounimlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852562.482329-113-94645376508000/AnsiballZ_setup.py'
Jan 31 09:42:42 compute-0 sudo[100572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:43 compute-0 python3.9[100574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:42:43 compute-0 sudo[100572]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:43 compute-0 sudo[100656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejeryprnmlkwtexrxyxrnrcqdrwqfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852562.482329-113-94645376508000/AnsiballZ_dnf.py'
Jan 31 09:42:43 compute-0 sudo[100656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:43 compute-0 python3.9[100658]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:42:45 compute-0 sudo[100656]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:45 compute-0 sudo[100809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccfvgtrwpgjqhvvvgnxdkwzpsssksps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852565.2001302-125-91102359033207/AnsiballZ_systemd.py'
Jan 31 09:42:45 compute-0 sudo[100809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:46 compute-0 python3.9[100811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:42:46 compute-0 sudo[100809]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:46 compute-0 python3.9[100964]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:47 compute-0 python3.9[101085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852566.2330608-133-66034854865275/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:47 compute-0 python3.9[101235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:48 compute-0 python3.9[101356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852567.1862297-133-38312173185493/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:50 compute-0 python3.9[101506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:50 compute-0 python3.9[101627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852568.867565-177-74260762705194/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:51 compute-0 python3.9[101777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:51 compute-0 python3.9[101898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852570.6247773-177-2186771858882/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:52 compute-0 python3.9[102048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:42:52 compute-0 sudo[102200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deznlhwlsccqcysomoxfmutrmtsqpkvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852572.3369088-215-31104537130227/AnsiballZ_file.py'
Jan 31 09:42:52 compute-0 sudo[102200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:52 compute-0 python3.9[102202]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:52 compute-0 sudo[102200]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:53 compute-0 sudo[102352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvpmklliadfcpyksvybbibdwkcjfmvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852572.9105122-223-90979592648656/AnsiballZ_stat.py'
Jan 31 09:42:53 compute-0 sudo[102352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:53 compute-0 python3.9[102354]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:53 compute-0 sudo[102352]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:53 compute-0 sudo[102430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opyoujfuogjmbuapszxkhrhftjmdqjyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852572.9105122-223-90979592648656/AnsiballZ_file.py'
Jan 31 09:42:53 compute-0 sudo[102430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:53 compute-0 ovn_controller[97627]: 2026-01-31T09:42:53Z|00025|memory|INFO|16000 kB peak resident set size after 29.7 seconds
Jan 31 09:42:53 compute-0 ovn_controller[97627]: 2026-01-31T09:42:53Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 09:42:53 compute-0 podman[102432]: 2026-01-31 09:42:53.647159617 +0000 UTC m=+0.093179997 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 09:42:53 compute-0 python3.9[102433]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:53 compute-0 sudo[102430]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:54 compute-0 sudo[102608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjgfzasniwmiudbvqmowbogpnilyspay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852573.8904076-223-228596914938625/AnsiballZ_stat.py'
Jan 31 09:42:54 compute-0 sudo[102608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:54 compute-0 python3.9[102610]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:54 compute-0 sudo[102608]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:54 compute-0 sudo[102686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywxvyicjqdxmwcwbhpniwxybspbotnrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852573.8904076-223-228596914938625/AnsiballZ_file.py'
Jan 31 09:42:54 compute-0 sudo[102686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:54 compute-0 python3.9[102688]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:42:54 compute-0 sudo[102686]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:55 compute-0 sudo[102838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqqxekooayuwoliiyorgecvmvpotknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852574.8309526-246-238883580777270/AnsiballZ_file.py'
Jan 31 09:42:55 compute-0 sudo[102838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:55 compute-0 python3.9[102840]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:55 compute-0 sudo[102838]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:55 compute-0 sudo[102990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trydssnqbohzuywleoulgnzjtgraymjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852575.4733903-254-260028158664742/AnsiballZ_stat.py'
Jan 31 09:42:55 compute-0 sudo[102990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:55 compute-0 python3.9[102992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:55 compute-0 sudo[102990]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:56 compute-0 sudo[103068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndalazfvmkprzqmhaszebvhykfmaqasj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852575.4733903-254-260028158664742/AnsiballZ_file.py'
Jan 31 09:42:56 compute-0 sudo[103068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:56 compute-0 python3.9[103070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:56 compute-0 sudo[103068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:56 compute-0 sudo[103220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhbnotcvkjfjxqxmrvqhdrzkygjjnvyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852576.4305289-266-133410233941497/AnsiballZ_stat.py'
Jan 31 09:42:56 compute-0 sudo[103220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:56 compute-0 python3.9[103222]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:56 compute-0 sudo[103220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:57 compute-0 sudo[103298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefssorszrymqstounbxfrrkaediwuhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852576.4305289-266-133410233941497/AnsiballZ_file.py'
Jan 31 09:42:57 compute-0 sudo[103298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:57 compute-0 python3.9[103300]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:57 compute-0 sudo[103298]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:57 compute-0 sudo[103450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgnbdgeyjzymmnhzhkixftdjystbalz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852577.5239017-278-227834851046477/AnsiballZ_systemd.py'
Jan 31 09:42:57 compute-0 sudo[103450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:58 compute-0 python3.9[103452]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:42:58 compute-0 systemd[1]: Reloading.
Jan 31 09:42:58 compute-0 systemd-rc-local-generator[103472]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:42:58 compute-0 systemd-sysv-generator[103475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:42:58 compute-0 sudo[103450]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:58 compute-0 sudo[103639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlftqolfiqpfzenimruvoubudyiuevuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852578.5231423-286-28991573936719/AnsiballZ_stat.py'
Jan 31 09:42:58 compute-0 sudo[103639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:58 compute-0 python3.9[103641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:59 compute-0 sudo[103639]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:59 compute-0 sudo[103717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzoakfguejytylohifesmchrhexsbvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852578.5231423-286-28991573936719/AnsiballZ_file.py'
Jan 31 09:42:59 compute-0 sudo[103717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:59 compute-0 python3.9[103719]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:42:59 compute-0 sudo[103717]: pam_unix(sudo:session): session closed for user root
Jan 31 09:42:59 compute-0 sudo[103869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmoiqemhxadnbevkptlqdovwtcvbpqum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852579.5141833-298-278681947006119/AnsiballZ_stat.py'
Jan 31 09:42:59 compute-0 sudo[103869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:42:59 compute-0 python3.9[103871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:42:59 compute-0 sudo[103869]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:00 compute-0 sudo[103947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkuqptcmeitmwdermazcoonnldacqloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852579.5141833-298-278681947006119/AnsiballZ_file.py'
Jan 31 09:43:00 compute-0 sudo[103947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:00 compute-0 python3.9[103949]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:00 compute-0 sudo[103947]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:00 compute-0 sudo[104099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cojsbgigvdeabvhrficwaohczwxxszfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852580.467826-310-128847419267740/AnsiballZ_systemd.py'
Jan 31 09:43:00 compute-0 sudo[104099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:00 compute-0 python3.9[104101]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:00 compute-0 systemd[1]: Reloading.
Jan 31 09:43:01 compute-0 systemd-rc-local-generator[104130]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:01 compute-0 systemd-sysv-generator[104133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:01 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 09:43:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 09:43:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 09:43:01 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 09:43:01 compute-0 sudo[104099]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:01 compute-0 sudo[104293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhlssfijtspzqlhhhvaoydjcyetyktzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852581.424608-320-188542861265101/AnsiballZ_file.py'
Jan 31 09:43:01 compute-0 sudo[104293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:01 compute-0 python3.9[104295]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:43:01 compute-0 sudo[104293]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:02 compute-0 sudo[104445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-synmhblortpvcreosdkohbdtenaholth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852581.9603994-328-160071195553486/AnsiballZ_stat.py'
Jan 31 09:43:02 compute-0 sudo[104445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:02 compute-0 python3.9[104447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:43:02 compute-0 sudo[104445]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:02 compute-0 sudo[104568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oggcdnczcujpprduskgvwzxttprpeahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852581.9603994-328-160071195553486/AnsiballZ_copy.py'
Jan 31 09:43:02 compute-0 sudo[104568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:03 compute-0 python3.9[104570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852581.9603994-328-160071195553486/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:43:03 compute-0 sudo[104568]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:03 compute-0 sudo[104720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucefetzeeswdxebrmqataoyheajoepd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852583.3912425-345-146561468428378/AnsiballZ_file.py'
Jan 31 09:43:03 compute-0 sudo[104720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:03 compute-0 python3.9[104722]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:03 compute-0 sudo[104720]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:04 compute-0 sudo[104872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urkhaeidjeyoqvcnivfuqwvkukirgtjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852583.976456-353-101142335829330/AnsiballZ_file.py'
Jan 31 09:43:04 compute-0 sudo[104872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:04 compute-0 python3.9[104874]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:43:04 compute-0 sudo[104872]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:04 compute-0 sudo[105024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtdktkuixzvvzzaarlusllnbuvghaha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852584.5935178-361-250961575515906/AnsiballZ_stat.py'
Jan 31 09:43:04 compute-0 sudo[105024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:05 compute-0 python3.9[105026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:43:05 compute-0 sudo[105024]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:05 compute-0 sudo[105147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvfnhmruexnuiajvtlvafxwaocijlcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852584.5935178-361-250961575515906/AnsiballZ_copy.py'
Jan 31 09:43:05 compute-0 sudo[105147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:05 compute-0 python3.9[105149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852584.5935178-361-250961575515906/.source.json _original_basename=.7r679qs0 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:05 compute-0 sudo[105147]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:06 compute-0 python3.9[105299]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:07 compute-0 sudo[105720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clsgqhrhfpczbvrcerkjwosbscyfezon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852587.5307865-401-189166422375074/AnsiballZ_container_config_data.py'
Jan 31 09:43:07 compute-0 sudo[105720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:08 compute-0 python3.9[105722]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 09:43:08 compute-0 sudo[105720]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:08 compute-0 sudo[105872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdevmxjuaadthlatdpdopoxsrrkulylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852588.450279-412-194509894279378/AnsiballZ_container_config_hash.py'
Jan 31 09:43:08 compute-0 sudo[105872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:09 compute-0 python3.9[105874]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:43:09 compute-0 sudo[105872]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:09 compute-0 sudo[106024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtveezamswojhzvshqcznjmyqjakanc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852589.3049502-422-201351285686346/AnsiballZ_edpm_container_manage.py'
Jan 31 09:43:09 compute-0 sudo[106024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:10 compute-0 python3[106026]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:43:10 compute-0 podman[106065]: 2026-01-31 09:43:10.209299461 +0000 UTC m=+0.054906327 container create 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 09:43:10 compute-0 podman[106065]: 2026-01-31 09:43:10.185683759 +0000 UTC m=+0.031290645 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:43:10 compute-0 python3[106026]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:43:10 compute-0 sudo[106024]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:10 compute-0 sudo[106251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkfyogtlcelnzjlxzekzosezgisuzgyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852590.5099134-430-213188394472044/AnsiballZ_stat.py'
Jan 31 09:43:10 compute-0 sudo[106251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:10 compute-0 python3.9[106253]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:43:10 compute-0 sudo[106251]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:11 compute-0 sudo[106405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjaikriuhkblytpluokecptxoixqusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852591.2014408-439-274325711032187/AnsiballZ_file.py'
Jan 31 09:43:11 compute-0 sudo[106405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:11 compute-0 python3.9[106407]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:11 compute-0 sudo[106405]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:12 compute-0 sudo[106481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kazzlrxzgsjtvebhmmvmqzggjdwkydxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852591.2014408-439-274325711032187/AnsiballZ_stat.py'
Jan 31 09:43:12 compute-0 sudo[106481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:12 compute-0 python3.9[106483]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:43:12 compute-0 sudo[106481]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:12 compute-0 sudo[106632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqdlpfdkvlkuoqmcewhbtzmlsmkdejue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852592.3276708-439-40358530861027/AnsiballZ_copy.py'
Jan 31 09:43:12 compute-0 sudo[106632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:12 compute-0 python3.9[106634]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769852592.3276708-439-40358530861027/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:12 compute-0 sudo[106632]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:13 compute-0 sudo[106708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwycvffoycvlzmbfroirlmnbcazhnyzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852592.3276708-439-40358530861027/AnsiballZ_systemd.py'
Jan 31 09:43:13 compute-0 sudo[106708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:13 compute-0 python3.9[106710]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:43:13 compute-0 systemd[1]: Reloading.
Jan 31 09:43:13 compute-0 systemd-rc-local-generator[106727]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:13 compute-0 systemd-sysv-generator[106738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:13 compute-0 sudo[106708]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:13 compute-0 sudo[106819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvlnoejnwwivmmycksudeecpbottshi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852592.3276708-439-40358530861027/AnsiballZ_systemd.py'
Jan 31 09:43:13 compute-0 sudo[106819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:14 compute-0 python3.9[106821]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:14 compute-0 systemd[1]: Reloading.
Jan 31 09:43:14 compute-0 systemd-sysv-generator[106850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:14 compute-0 systemd-rc-local-generator[106847]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:14 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 09:43:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50fbd1037f9f0c9244eecc45bbca5bf8491c88585743ea37cc0679ee9eaee7e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 09:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50fbd1037f9f0c9244eecc45bbca5bf8491c88585743ea37cc0679ee9eaee7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:43:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.
Jan 31 09:43:14 compute-0 podman[106862]: 2026-01-31 09:43:14.669380538 +0000 UTC m=+0.290366268 container init 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + sudo -E kolla_set_configs
Jan 31 09:43:14 compute-0 podman[106862]: 2026-01-31 09:43:14.707592995 +0000 UTC m=+0.328578685 container start 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Validating config file
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Copying service configuration files
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Writing out command to execute
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: ++ cat /run_command
Jan 31 09:43:14 compute-0 edpm-start-podman-container[106862]: ovn_metadata_agent
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + CMD=neutron-ovn-metadata-agent
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + ARGS=
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + sudo kolla_copy_cacerts
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + [[ ! -n '' ]]
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + . kolla_extend_start
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + umask 0022
Jan 31 09:43:14 compute-0 ovn_metadata_agent[106878]: + exec neutron-ovn-metadata-agent
Jan 31 09:43:14 compute-0 edpm-start-podman-container[106861]: Creating additional drop-in dependency for "ovn_metadata_agent" (1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480)
Jan 31 09:43:14 compute-0 podman[106885]: 2026-01-31 09:43:14.837771048 +0000 UTC m=+0.125840895 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:43:14 compute-0 systemd[1]: Reloading.
Jan 31 09:43:14 compute-0 systemd-rc-local-generator[106950]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:14 compute-0 systemd-sysv-generator[106956]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:15 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 31 09:43:15 compute-0 sudo[106819]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:15 compute-0 python3.9[107113]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.357 106883 INFO neutron.common.config [-] Logging enabled!
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.357 106883 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.357 106883 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.358 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.358 106883 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.358 106883 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.358 106883 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.358 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.359 106883 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.360 106883 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.361 106883 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.362 106883 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.363 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.364 106883 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.365 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.366 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.367 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.368 106883 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.369 106883 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.370 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.371 106883 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.372 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.373 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.374 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.375 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.376 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.377 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.378 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.379 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.380 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.381 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.382 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.383 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.384 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.385 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.386 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.387 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.388 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.389 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.390 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.391 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.392 106883 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.401 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.401 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.401 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.401 106883 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.401 106883 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.427 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 089e34f1-a6ad-49ae-8ce3-e9f7773bc2da (UUID: 089e34f1-a6ad-49ae-8ce3-e9f7773bc2da) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.468 106883 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.469 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.469 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.469 106883 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.472 106883 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.478 106883 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.485 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '089e34f1-a6ad-49ae-8ce3-e9f7773bc2da'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], external_ids={}, name=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, nb_cfg_timestamp=1769852551922, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.486 106883 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f82ea581100>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.487 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.487 106883 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.487 106883 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.487 106883 INFO oslo_service.service [-] Starting 1 workers
Jan 31 09:43:16 compute-0 sudo[107263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qikyfztypbhwwnkxmavdvxupmnsqriai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852596.2711775-484-76510407863335/AnsiballZ_stat.py'
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.492 106883 DEBUG oslo_service.service [-] Started child 107265 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 09:43:16 compute-0 sudo[107263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.494 107265 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-162616'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.495 106883 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1ulx1w06/privsep.sock']
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.526 107265 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.526 107265 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.526 107265 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.530 107265 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.538 107265 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 09:43:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.544 107265 INFO eventlet.wsgi.server [-] (107265) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 09:43:16 compute-0 python3.9[107266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:43:16 compute-0 sudo[107263]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:16 compute-0 sudo[107393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zitceqlveajwfvyhsirworbofdixbwlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852596.2711775-484-76510407863335/AnsiballZ_copy.py'
Jan 31 09:43:16 compute-0 sudo[107393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:16 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.094 106883 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.095 106883 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1ulx1w06/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.987 107396 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.990 107396 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.992 107396 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:16.992 107396 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107396
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.097 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9da2dc-eb0c-4705-9c2b-e720a4ae411d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:43:17 compute-0 python3.9[107395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852596.2711775-484-76510407863335/.source.yaml _original_basename=.nulbpamv follow=False checksum=9466c8023b3db6321065b9c36992bc414e67640a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:17 compute-0 sudo[107393]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.544 107396 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.545 107396 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:43:17 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:17.545 107396 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:43:17 compute-0 sshd-session[98661]: Connection closed by 192.168.122.30 port 46692
Jan 31 09:43:17 compute-0 sshd-session[98658]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:43:17 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 31 09:43:17 compute-0 systemd[1]: session-21.scope: Consumed 30.414s CPU time.
Jan 31 09:43:17 compute-0 systemd-logind[795]: Session 21 logged out. Waiting for processes to exit.
Jan 31 09:43:17 compute-0 systemd-logind[795]: Removed session 21.
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.021 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad24556-e861-4957-8c71-a712df9dd9a4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.024 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, column=external_ids, values=({'neutron:ovn-metadata-id': 'c3550adb-cbf6-526f-94d6-ed7f0a8e54d7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.060 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.067 106883 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.067 106883 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.068 106883 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.068 106883 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.068 106883 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.068 106883 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.069 106883 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.069 106883 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.069 106883 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.069 106883 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.070 106883 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.070 106883 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.070 106883 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.070 106883 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.071 106883 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.071 106883 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.071 106883 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.071 106883 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.072 106883 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.072 106883 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.072 106883 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.072 106883 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.073 106883 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.073 106883 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.073 106883 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.074 106883 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.074 106883 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.074 106883 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.075 106883 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.075 106883 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.075 106883 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.076 106883 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.076 106883 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.076 106883 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.077 106883 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.077 106883 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.077 106883 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.078 106883 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.078 106883 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.078 106883 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.078 106883 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.079 106883 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.079 106883 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.079 106883 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.080 106883 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.080 106883 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.080 106883 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.080 106883 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.081 106883 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.081 106883 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.081 106883 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.081 106883 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.082 106883 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.082 106883 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.082 106883 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.083 106883 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.083 106883 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.083 106883 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.083 106883 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.084 106883 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.084 106883 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.084 106883 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.084 106883 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.085 106883 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.085 106883 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.085 106883 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.086 106883 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.086 106883 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.086 106883 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.087 106883 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.087 106883 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.087 106883 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.087 106883 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.088 106883 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.088 106883 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.088 106883 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.089 106883 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.089 106883 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.089 106883 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.089 106883 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.090 106883 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.090 106883 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.090 106883 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.090 106883 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.091 106883 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.091 106883 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.091 106883 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.092 106883 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.092 106883 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.092 106883 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.092 106883 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.093 106883 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.093 106883 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.093 106883 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.093 106883 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.094 106883 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.094 106883 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.094 106883 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.095 106883 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.095 106883 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.095 106883 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.095 106883 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.096 106883 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.096 106883 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.096 106883 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.097 106883 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.097 106883 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.097 106883 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.097 106883 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.098 106883 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.098 106883 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.098 106883 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.099 106883 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.099 106883 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.099 106883 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.100 106883 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.100 106883 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.100 106883 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.100 106883 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.101 106883 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.101 106883 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.101 106883 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.102 106883 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.102 106883 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.102 106883 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.103 106883 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.103 106883 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.103 106883 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.103 106883 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.104 106883 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.104 106883 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.104 106883 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.104 106883 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.105 106883 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.105 106883 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.105 106883 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.106 106883 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.106 106883 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.106 106883 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.107 106883 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.107 106883 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.107 106883 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.107 106883 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.108 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.108 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.108 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.108 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.109 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.109 106883 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.109 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.110 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.110 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.110 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.110 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.111 106883 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.111 106883 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.111 106883 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.111 106883 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.111 106883 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.112 106883 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.113 106883 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.113 106883 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.113 106883 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.113 106883 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.113 106883 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.114 106883 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.114 106883 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.114 106883 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.114 106883 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.114 106883 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.115 106883 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.115 106883 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.115 106883 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.115 106883 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.115 106883 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.116 106883 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.116 106883 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.116 106883 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.116 106883 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.116 106883 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.117 106883 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.117 106883 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.117 106883 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.117 106883 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.117 106883 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.118 106883 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.118 106883 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.118 106883 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.118 106883 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.118 106883 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.119 106883 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.119 106883 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.119 106883 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.119 106883 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.119 106883 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.120 106883 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.121 106883 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.121 106883 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.121 106883 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.121 106883 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.121 106883 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.122 106883 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.122 106883 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.122 106883 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.122 106883 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.122 106883 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.123 106883 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.124 106883 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.124 106883 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.124 106883 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.124 106883 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.124 106883 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.125 106883 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.126 106883 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.126 106883 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.126 106883 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.126 106883 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.126 106883 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.127 106883 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.127 106883 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.127 106883 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.127 106883 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.127 106883 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.128 106883 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.128 106883 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.128 106883 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.128 106883 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.128 106883 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.129 106883 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.129 106883 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.129 106883 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.129 106883 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.129 106883 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.130 106883 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.130 106883 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.130 106883 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.130 106883 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.130 106883 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.131 106883 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.131 106883 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.131 106883 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.131 106883 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.131 106883 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.132 106883 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.132 106883 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.132 106883 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.132 106883 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.132 106883 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.133 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.133 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.133 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.133 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.133 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.134 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.134 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.134 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.134 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.134 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.135 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.136 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.136 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.136 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.136 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.136 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.137 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.137 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.137 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.137 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.137 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.138 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.138 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.138 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.138 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.138 106883 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.139 106883 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.139 106883 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.139 106883 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.139 106883 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:43:18 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:43:18.139 106883 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:43:23 compute-0 sshd-session[107425]: Accepted publickey for zuul from 192.168.122.30 port 46402 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:43:23 compute-0 systemd-logind[795]: New session 22 of user zuul.
Jan 31 09:43:23 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 31 09:43:23 compute-0 sshd-session[107425]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:43:23 compute-0 podman[107528]: 2026-01-31 09:43:23.998335209 +0000 UTC m=+0.118256495 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 09:43:24 compute-0 python3.9[107604]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:43:25 compute-0 sudo[107758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfereklcuatjvlntahjodlppdiczmkva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852604.9920974-29-240057417899574/AnsiballZ_command.py'
Jan 31 09:43:25 compute-0 sudo[107758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:25 compute-0 python3.9[107760]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:25 compute-0 sudo[107758]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:26 compute-0 sudo[107922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkdnwzdezptzdkaynkzcvukpdlystymq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852605.9997315-40-74882237279225/AnsiballZ_systemd_service.py'
Jan 31 09:43:26 compute-0 sudo[107922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:26 compute-0 python3.9[107924]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:43:26 compute-0 systemd[1]: Reloading.
Jan 31 09:43:27 compute-0 systemd-sysv-generator[107952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:27 compute-0 systemd-rc-local-generator[107944]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:27 compute-0 sudo[107922]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:27 compute-0 python3.9[108109]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:43:27 compute-0 network[108126]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:43:27 compute-0 network[108127]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:43:27 compute-0 network[108128]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:43:30 compute-0 sudo[108387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxagftdpzwuknoblfgbpahhizmoylxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852610.3501022-59-10015933154309/AnsiballZ_systemd_service.py'
Jan 31 09:43:30 compute-0 sudo[108387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:30 compute-0 python3.9[108389]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:30 compute-0 sudo[108387]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:31 compute-0 sudo[108540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shnurwlgklmerphuuuulwhcpjuzbkdsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852611.0976732-59-168552508812811/AnsiballZ_systemd_service.py'
Jan 31 09:43:31 compute-0 sudo[108540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:31 compute-0 python3.9[108542]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:31 compute-0 sudo[108540]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:32 compute-0 sudo[108693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viigraqcngbbtuyzchhlpxykqaizgwgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852611.7896802-59-218563396227751/AnsiballZ_systemd_service.py'
Jan 31 09:43:32 compute-0 sudo[108693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:32 compute-0 python3.9[108695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:32 compute-0 sudo[108693]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:32 compute-0 sudo[108846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpfvlneiqkhqzxtmjdguxgylhvrdcgnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852612.4519598-59-99835692909/AnsiballZ_systemd_service.py'
Jan 31 09:43:32 compute-0 sudo[108846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:32 compute-0 python3.9[108848]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:32 compute-0 sudo[108846]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:33 compute-0 sudo[108999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmwnsweydpyeifvwlmrygpaduhuyuny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852613.0820158-59-51090264001270/AnsiballZ_systemd_service.py'
Jan 31 09:43:33 compute-0 sudo[108999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:33 compute-0 python3.9[109001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:33 compute-0 sudo[108999]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:34 compute-0 sudo[109152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlheihmyizyhxstpdfmscroiioiixgqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852613.765444-59-5388510820943/AnsiballZ_systemd_service.py'
Jan 31 09:43:34 compute-0 sudo[109152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:34 compute-0 python3.9[109154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:34 compute-0 sudo[109152]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:34 compute-0 sudo[109305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqsmgupjzhrufwtbncspoonclrlbwtrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852614.4203947-59-103267265878320/AnsiballZ_systemd_service.py'
Jan 31 09:43:34 compute-0 sudo[109305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:34 compute-0 python3.9[109307]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:43:35 compute-0 sudo[109305]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:35 compute-0 sudo[109458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjybspyedmkavamkuoluyslfxqmmyokl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852615.2932837-111-170723615206405/AnsiballZ_file.py'
Jan 31 09:43:35 compute-0 sudo[109458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:35 compute-0 python3.9[109460]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:35 compute-0 sudo[109458]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:36 compute-0 sshd-session[109509]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 09:43:36 compute-0 sshd-session[109509]: Connection reset by 176.120.22.52 port 34184
Jan 31 09:43:36 compute-0 sudo[109612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niipaactpmijhjuxixigqgcgtywawtjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852616.0607991-111-183446031901461/AnsiballZ_file.py'
Jan 31 09:43:36 compute-0 sudo[109612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:36 compute-0 python3.9[109614]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:36 compute-0 sudo[109612]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:36 compute-0 sudo[109764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavxfjsblarjdklkssyxvvbbimsgdauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852616.6315737-111-132125292843367/AnsiballZ_file.py'
Jan 31 09:43:36 compute-0 sudo[109764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:37 compute-0 python3.9[109766]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:37 compute-0 sudo[109764]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:37 compute-0 sudo[109916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pccjdabklopliqqbabkogtqksrhaqsyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852617.2342885-111-183249163344236/AnsiballZ_file.py'
Jan 31 09:43:37 compute-0 sudo[109916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:37 compute-0 python3.9[109918]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:37 compute-0 sudo[109916]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:37 compute-0 sudo[110068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvmvvsfyudfgboagyumzleobcczqpgtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852617.7541256-111-277095799497564/AnsiballZ_file.py'
Jan 31 09:43:37 compute-0 sudo[110068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:38 compute-0 python3.9[110070]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:38 compute-0 sudo[110068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:38 compute-0 sudo[110220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyobbgakdzewcgzpefgaqspfkfymacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852618.2957635-111-121396425764494/AnsiballZ_file.py'
Jan 31 09:43:38 compute-0 sudo[110220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:38 compute-0 python3.9[110222]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:38 compute-0 sudo[110220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:39 compute-0 sudo[110372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajrzdkfxmormiybmvlrifbieouppycta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852618.9933236-111-67083399407558/AnsiballZ_file.py'
Jan 31 09:43:39 compute-0 sudo[110372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:39 compute-0 python3.9[110374]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:39 compute-0 sudo[110372]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:39 compute-0 sudo[110524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfnxlvsrbvgjdkkvmdvntrmkkhvkhapz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852619.6087015-161-108098554166651/AnsiballZ_file.py'
Jan 31 09:43:39 compute-0 sudo[110524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:40 compute-0 python3.9[110526]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:40 compute-0 sudo[110524]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:40 compute-0 sudo[110676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvhgfqejlnedjgwfylvfrquamexpapjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852620.2052128-161-82720410658605/AnsiballZ_file.py'
Jan 31 09:43:40 compute-0 sudo[110676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:40 compute-0 python3.9[110678]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:40 compute-0 sudo[110676]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:40 compute-0 sudo[110828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdcpbifxhmkojlibtwnmssiwcwddrnaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852620.7513502-161-121802049840012/AnsiballZ_file.py'
Jan 31 09:43:40 compute-0 sudo[110828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:41 compute-0 python3.9[110830]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:41 compute-0 sudo[110828]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:41 compute-0 sudo[110980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbcbshsuauhywvyvzbcyatfggmcykjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852621.3212636-161-8572722226580/AnsiballZ_file.py'
Jan 31 09:43:41 compute-0 sudo[110980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:41 compute-0 python3.9[110982]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:41 compute-0 sudo[110980]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:42 compute-0 sudo[111132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrmmrpnmkqmastgztraowgattrbpvumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852621.83341-161-277153596958331/AnsiballZ_file.py'
Jan 31 09:43:42 compute-0 sudo[111132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:42 compute-0 python3.9[111134]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:42 compute-0 sudo[111132]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:42 compute-0 sudo[111284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galzxjhbgvbvcrninwwkfgyvosjoleal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852622.3634279-161-20532000040855/AnsiballZ_file.py'
Jan 31 09:43:42 compute-0 sudo[111284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:42 compute-0 python3.9[111286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:42 compute-0 sudo[111284]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:43 compute-0 sudo[111436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqufqkwvyospesiingbltxqjgxjjnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852623.0664241-161-41192383539583/AnsiballZ_file.py'
Jan 31 09:43:43 compute-0 sudo[111436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:43 compute-0 python3.9[111438]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:43:43 compute-0 sudo[111436]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:43 compute-0 sudo[111588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iginlahboweaxlhoopqcoqyqbwcdtlbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852623.6738575-212-91430697247305/AnsiballZ_command.py'
Jan 31 09:43:43 compute-0 sudo[111588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:44 compute-0 python3.9[111590]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:44 compute-0 sudo[111588]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:44 compute-0 python3.9[111742]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:43:44 compute-0 podman[111743]: 2026-01-31 09:43:44.953904577 +0000 UTC m=+0.079229169 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 09:43:45 compute-0 sudo[111911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alfrcjapcyowcidvxoyvytciczjlyxea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852625.1488547-230-273643579458402/AnsiballZ_systemd_service.py'
Jan 31 09:43:45 compute-0 sudo[111911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:45 compute-0 python3.9[111913]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:43:46 compute-0 systemd[1]: Reloading.
Jan 31 09:43:46 compute-0 systemd-sysv-generator[111944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:43:46 compute-0 systemd-rc-local-generator[111941]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:43:46 compute-0 sudo[111911]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:46 compute-0 sudo[112098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjidrypuowhymtpjmbdisqgkzwezmlln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852626.5325718-238-143176920522592/AnsiballZ_command.py'
Jan 31 09:43:46 compute-0 sudo[112098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:47 compute-0 python3.9[112100]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:47 compute-0 sudo[112098]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:47 compute-0 sudo[112251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idecixookvgvaksinebagrwnguzlavqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852627.148424-238-81380350041904/AnsiballZ_command.py'
Jan 31 09:43:47 compute-0 sudo[112251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:47 compute-0 python3.9[112253]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:48 compute-0 sudo[112251]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:49 compute-0 sudo[112404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaibnqhutkrgkwvnvnwgkovhhcvohuaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852628.9780083-238-10465938647990/AnsiballZ_command.py'
Jan 31 09:43:49 compute-0 sudo[112404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:49 compute-0 python3.9[112406]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:49 compute-0 sudo[112404]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:49 compute-0 sudo[112557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aksyplwqiwksynvsuhhcvolmrlmppuee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852629.5986426-238-34447711028499/AnsiballZ_command.py'
Jan 31 09:43:49 compute-0 sudo[112557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:49 compute-0 python3.9[112559]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:50 compute-0 sudo[112557]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:50 compute-0 sudo[112710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfeeudfbazibbayprcuerawqcosrbgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852630.145137-238-170398071160201/AnsiballZ_command.py'
Jan 31 09:43:50 compute-0 sudo[112710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:50 compute-0 python3.9[112712]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:50 compute-0 sudo[112710]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:50 compute-0 sudo[112863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmamfungswughozvqlwyxiihfukcbtst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852630.7259743-238-90289589657306/AnsiballZ_command.py'
Jan 31 09:43:50 compute-0 sudo[112863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:51 compute-0 python3.9[112865]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:51 compute-0 sudo[112863]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:51 compute-0 sudo[113016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhywfxebdcqqsnbhlgbuguilnbmwuwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852631.2863586-238-937598843676/AnsiballZ_command.py'
Jan 31 09:43:51 compute-0 sudo[113016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:51 compute-0 python3.9[113018]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:43:51 compute-0 sudo[113016]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:52 compute-0 sudo[113169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhklwdbgtevzhlhpntjuhlwvlmgwxup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852632.4055285-292-190640346567379/AnsiballZ_getent.py'
Jan 31 09:43:52 compute-0 sudo[113169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:53 compute-0 python3.9[113171]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 09:43:53 compute-0 sudo[113169]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:53 compute-0 sudo[113322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgidymbssdatbaktarsxeuvfxvmnkaiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852633.243966-300-178942180980037/AnsiballZ_group.py'
Jan 31 09:43:53 compute-0 sudo[113322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:53 compute-0 python3.9[113324]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:43:53 compute-0 groupadd[113325]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 09:43:53 compute-0 groupadd[113325]: group added to /etc/gshadow: name=libvirt
Jan 31 09:43:53 compute-0 groupadd[113325]: new group: name=libvirt, GID=42473
Jan 31 09:43:53 compute-0 sudo[113322]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:54 compute-0 sudo[113491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjmsdthdftfwcowoenpaqhbpvqczrply ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852634.080573-308-101812619875489/AnsiballZ_user.py'
Jan 31 09:43:54 compute-0 sudo[113491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:54 compute-0 podman[113454]: 2026-01-31 09:43:54.627598657 +0000 UTC m=+0.118768947 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 09:43:54 compute-0 python3.9[113497]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 09:43:54 compute-0 useradd[113509]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 09:43:54 compute-0 sudo[113491]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:55 compute-0 sudo[113665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukamjlcfxggnmmevhpccjykvbiffiutx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852635.1645975-319-194398512929708/AnsiballZ_setup.py'
Jan 31 09:43:55 compute-0 sudo[113665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:55 compute-0 python3.9[113667]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:43:55 compute-0 sudo[113665]: pam_unix(sudo:session): session closed for user root
Jan 31 09:43:56 compute-0 sudo[113749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwxqrtfdqgrwaukaoacfdmdpedvwqbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852635.1645975-319-194398512929708/AnsiballZ_dnf.py'
Jan 31 09:43:56 compute-0 sudo[113749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:43:56 compute-0 python3.9[113751]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:44:15 compute-0 podman[113941]: 2026-01-31 09:44:15.940440458 +0000 UTC m=+0.062447800 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:44:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:44:16.402 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:44:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:44:16.404 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:44:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:44:16.404 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:44:21 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:44:21 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:44:24 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 09:44:24 compute-0 podman[113969]: 2026-01-31 09:44:24.961929183 +0000 UTC m=+0.072616644 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 09:44:30 compute-0 kernel: SELinux:  Converting 2767 SID table entries...
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:44:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:44:46 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 09:44:47 compute-0 podman[118409]: 2026-01-31 09:44:47.216168 +0000 UTC m=+0.336338142 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:44:55 compute-0 podman[127030]: 2026-01-31 09:44:55.982900483 +0000 UTC m=+0.114811114 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:45:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:45:16.403 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:45:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:45:16.403 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:45:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:45:16.404 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:45:17 compute-0 podman[130924]: 2026-01-31 09:45:17.932597012 +0000 UTC m=+0.057747066 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 09:45:18 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 09:45:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 09:45:19 compute-0 groupadd[130952]: group added to /etc/group: name=dnsmasq, GID=993
Jan 31 09:45:19 compute-0 groupadd[130952]: group added to /etc/gshadow: name=dnsmasq
Jan 31 09:45:19 compute-0 groupadd[130952]: new group: name=dnsmasq, GID=993
Jan 31 09:45:19 compute-0 useradd[130959]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 09:45:19 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:45:19 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 09:45:19 compute-0 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Jan 31 09:45:20 compute-0 groupadd[130972]: group added to /etc/group: name=clevis, GID=992
Jan 31 09:45:20 compute-0 groupadd[130972]: group added to /etc/gshadow: name=clevis
Jan 31 09:45:20 compute-0 groupadd[130972]: new group: name=clevis, GID=992
Jan 31 09:45:20 compute-0 useradd[130979]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 09:45:20 compute-0 usermod[130989]: add 'clevis' to group 'tss'
Jan 31 09:45:20 compute-0 usermod[130989]: add 'clevis' to shadow group 'tss'
Jan 31 09:45:22 compute-0 polkitd[43703]: Reloading rules
Jan 31 09:45:22 compute-0 polkitd[43703]: Collecting garbage unconditionally...
Jan 31 09:45:22 compute-0 polkitd[43703]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 09:45:22 compute-0 polkitd[43703]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 09:45:22 compute-0 polkitd[43703]: Finished loading, compiling and executing 3 rules
Jan 31 09:45:22 compute-0 polkitd[43703]: Reloading rules
Jan 31 09:45:22 compute-0 polkitd[43703]: Collecting garbage unconditionally...
Jan 31 09:45:22 compute-0 polkitd[43703]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 09:45:22 compute-0 polkitd[43703]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 09:45:22 compute-0 polkitd[43703]: Finished loading, compiling and executing 3 rules
Jan 31 09:45:23 compute-0 groupadd[131179]: group added to /etc/group: name=ceph, GID=167
Jan 31 09:45:23 compute-0 groupadd[131179]: group added to /etc/gshadow: name=ceph
Jan 31 09:45:23 compute-0 groupadd[131179]: new group: name=ceph, GID=167
Jan 31 09:45:23 compute-0 useradd[131185]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 31 09:45:26 compute-0 sshd[1005]: Received signal 15; terminating.
Jan 31 09:45:26 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 09:45:26 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 09:45:26 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 09:45:26 compute-0 systemd[1]: sshd.service: Consumed 1.213s CPU time, read 32.0K from disk, written 0B to disk.
Jan 31 09:45:26 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 09:45:26 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 31 09:45:26 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 09:45:26 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 09:45:26 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 09:45:26 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 31 09:45:26 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 31 09:45:26 compute-0 sshd[131714]: Server listening on 0.0.0.0 port 22.
Jan 31 09:45:26 compute-0 sshd[131714]: Server listening on :: port 22.
Jan 31 09:45:26 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 31 09:45:26 compute-0 podman[131702]: 2026-01-31 09:45:26.186889267 +0000 UTC m=+0.140818797 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 09:45:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:45:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:45:27 compute-0 systemd[1]: Reloading.
Jan 31 09:45:27 compute-0 systemd-rc-local-generator[131983]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:27 compute-0 systemd-sysv-generator[131990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:45:31 compute-0 sudo[113749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:32 compute-0 sudo[138778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewquvkdzpsfzitdbnbxhktjpwjpjcab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852731.61977-331-60082876403096/AnsiballZ_systemd.py'
Jan 31 09:45:32 compute-0 sudo[138778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:32 compute-0 python3.9[138804]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:45:32 compute-0 systemd[1]: Reloading.
Jan 31 09:45:32 compute-0 systemd-sysv-generator[139468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:32 compute-0 systemd-rc-local-generator[139464]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:32 compute-0 sudo[138778]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:33 compute-0 sudo[140371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmktesjrtlpjjpncivkzreomcadkelm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852732.967468-331-71100106108106/AnsiballZ_systemd.py'
Jan 31 09:45:33 compute-0 sudo[140371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:33 compute-0 python3.9[140397]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:45:33 compute-0 systemd[1]: Reloading.
Jan 31 09:45:33 compute-0 systemd-rc-local-generator[140735]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:33 compute-0 systemd-sysv-generator[140738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:45:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:45:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.158s CPU time.
Jan 31 09:45:33 compute-0 systemd[1]: run-rd5f0f50c01ba47229a9d2c384737fa67.service: Deactivated successfully.
Jan 31 09:45:33 compute-0 sudo[140371]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:34 compute-0 sudo[140900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbdshvhstbjeuspiwxlpgkqohtwytrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852733.8310852-331-244531716961178/AnsiballZ_systemd.py'
Jan 31 09:45:34 compute-0 sudo[140900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:34 compute-0 python3.9[140902]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:45:34 compute-0 systemd[1]: Reloading.
Jan 31 09:45:34 compute-0 systemd-sysv-generator[140935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:34 compute-0 systemd-rc-local-generator[140931]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:34 compute-0 sudo[140900]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:35 compute-0 sudo[141091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbdgobkvwepynjgivadktwkjvsurrzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852734.7956655-331-277571632554824/AnsiballZ_systemd.py'
Jan 31 09:45:35 compute-0 sudo[141091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:35 compute-0 python3.9[141093]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:45:35 compute-0 systemd[1]: Reloading.
Jan 31 09:45:35 compute-0 systemd-rc-local-generator[141121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:35 compute-0 systemd-sysv-generator[141124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:35 compute-0 sudo[141091]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:36 compute-0 sudo[141282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tajklmsuuexrelipgizezfucedrnvghh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852735.763778-360-4535375115094/AnsiballZ_systemd.py'
Jan 31 09:45:36 compute-0 sudo[141282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:36 compute-0 python3.9[141284]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:36 compute-0 systemd[1]: Reloading.
Jan 31 09:45:36 compute-0 systemd-sysv-generator[141318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:36 compute-0 systemd-rc-local-generator[141315]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:36 compute-0 sudo[141282]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:36 compute-0 sudo[141473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdqxrvegtybdkxktnkmnbvnwxfstzjql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852736.750104-360-63723741402029/AnsiballZ_systemd.py'
Jan 31 09:45:36 compute-0 sudo[141473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:37 compute-0 python3.9[141475]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:37 compute-0 systemd[1]: Reloading.
Jan 31 09:45:37 compute-0 systemd-rc-local-generator[141505]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:37 compute-0 systemd-sysv-generator[141509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:37 compute-0 sudo[141473]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:37 compute-0 sudo[141663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijpffqfulctwfwllurseqespmmownuac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852737.724061-360-252732691588897/AnsiballZ_systemd.py'
Jan 31 09:45:37 compute-0 sudo[141663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:38 compute-0 python3.9[141665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:39 compute-0 systemd[1]: Reloading.
Jan 31 09:45:39 compute-0 systemd-rc-local-generator[141695]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:39 compute-0 systemd-sysv-generator[141698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:39 compute-0 sudo[141663]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:39 compute-0 sudo[141853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uualsillovcfurajujrnjmhrevtmrfqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852739.704384-360-75006561894693/AnsiballZ_systemd.py'
Jan 31 09:45:39 compute-0 sudo[141853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:40 compute-0 python3.9[141855]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:40 compute-0 sudo[141853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:40 compute-0 sudo[142008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eohjuqnnoetzudemiapubcnuyfhauvbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852740.4443154-360-16098295554353/AnsiballZ_systemd.py'
Jan 31 09:45:40 compute-0 sudo[142008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:40 compute-0 python3.9[142010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:41 compute-0 systemd[1]: Reloading.
Jan 31 09:45:41 compute-0 systemd-rc-local-generator[142042]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:41 compute-0 systemd-sysv-generator[142047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:41 compute-0 sudo[142008]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:41 compute-0 sudo[142199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlcnbujvtwalbokfpchfzevgmwjgqsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852741.425698-396-137756291045260/AnsiballZ_systemd.py'
Jan 31 09:45:41 compute-0 sudo[142199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:42 compute-0 python3.9[142201]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 09:45:42 compute-0 systemd[1]: Reloading.
Jan 31 09:45:42 compute-0 systemd-rc-local-generator[142225]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:45:42 compute-0 systemd-sysv-generator[142228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:45:42 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 09:45:42 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 09:45:42 compute-0 sudo[142199]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:42 compute-0 sudo[142391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohhzougrvqyxvzrgyqtalhyuurdmmhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852742.6500833-404-180623442304336/AnsiballZ_systemd.py'
Jan 31 09:45:42 compute-0 sudo[142391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:43 compute-0 python3.9[142393]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:43 compute-0 sudo[142391]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:43 compute-0 sudo[142546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehhhcfotvlkclipsdkvbzzerroubytiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852743.4250975-404-55045471576611/AnsiballZ_systemd.py'
Jan 31 09:45:43 compute-0 sudo[142546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:43 compute-0 python3.9[142548]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:44 compute-0 sudo[142546]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:44 compute-0 sudo[142701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhbsjtgshajsfonjguhmoxejggcepmov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852744.1241076-404-138995773079334/AnsiballZ_systemd.py'
Jan 31 09:45:44 compute-0 sudo[142701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:44 compute-0 python3.9[142703]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:44 compute-0 sudo[142701]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:45 compute-0 sudo[142856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwszamqpjehwthngzspbiomxvthmsbbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852744.8431551-404-187815574063302/AnsiballZ_systemd.py'
Jan 31 09:45:45 compute-0 sudo[142856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:45 compute-0 python3.9[142858]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:45 compute-0 sudo[142856]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:45 compute-0 sudo[143011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpenpczuwaahursjycpkiilfhvgfsyms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852745.517022-404-160166896171012/AnsiballZ_systemd.py'
Jan 31 09:45:45 compute-0 sudo[143011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:46 compute-0 python3.9[143013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:46 compute-0 sudo[143011]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:46 compute-0 sudo[143166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncmomjcyarrqmcrlimxzmnylpkrzdxol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852746.187657-404-149631309438533/AnsiballZ_systemd.py'
Jan 31 09:45:46 compute-0 sudo[143166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:46 compute-0 python3.9[143168]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:46 compute-0 sudo[143166]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:47 compute-0 sudo[143321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeufgxibhkxostvnlhpelmltdcrfezto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852746.8481264-404-95492573727159/AnsiballZ_systemd.py'
Jan 31 09:45:47 compute-0 sudo[143321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:47 compute-0 python3.9[143323]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:47 compute-0 sudo[143321]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:47 compute-0 sudo[143476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupoacjouaaxlksxgzbpiowjzwgwejfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852747.603942-404-254326454969262/AnsiballZ_systemd.py'
Jan 31 09:45:47 compute-0 sudo[143476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:48 compute-0 python3.9[143478]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:48 compute-0 sudo[143476]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:48 compute-0 podman[143480]: 2026-01-31 09:45:48.285085582 +0000 UTC m=+0.072264963 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:45:48 compute-0 sudo[143648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqovpbittdpukqluqhjlvjnbmopjqftp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852748.3836656-404-21735935458346/AnsiballZ_systemd.py'
Jan 31 09:45:48 compute-0 sudo[143648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:48 compute-0 python3.9[143650]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:48 compute-0 sudo[143648]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:49 compute-0 sudo[143803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlfhzlkutgtzljwrhvrremwgexprhvom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852749.0603368-404-269416079282567/AnsiballZ_systemd.py'
Jan 31 09:45:49 compute-0 sudo[143803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:49 compute-0 python3.9[143805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:49 compute-0 sudo[143803]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:50 compute-0 sudo[143958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apecwsmngzvwsujhpiwsxxfjrfauckid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852749.7422585-404-223024352498355/AnsiballZ_systemd.py'
Jan 31 09:45:50 compute-0 sudo[143958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:50 compute-0 python3.9[143960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:50 compute-0 sudo[143958]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:50 compute-0 sudo[144113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xezramkyfmgtmqghwdzeklodzxwveyoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852750.4585447-404-6362582287173/AnsiballZ_systemd.py'
Jan 31 09:45:50 compute-0 sudo[144113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:50 compute-0 python3.9[144115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:51 compute-0 sudo[144113]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:51 compute-0 sudo[144268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsbbayvnmwolmqeilcspqxrvgppzmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852751.1514187-404-127379937192350/AnsiballZ_systemd.py'
Jan 31 09:45:51 compute-0 sudo[144268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:51 compute-0 python3.9[144270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:51 compute-0 sudo[144268]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:52 compute-0 sudo[144423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdnynudywlspnqhedyrirzymmmrcrdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852751.90803-404-73820381833672/AnsiballZ_systemd.py'
Jan 31 09:45:52 compute-0 sudo[144423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:52 compute-0 python3.9[144425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 09:45:52 compute-0 sudo[144423]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:53 compute-0 sudo[144578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyljrrjxmqfdogbwlcijdgvhuarpengh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852752.8848352-506-252035531531343/AnsiballZ_file.py'
Jan 31 09:45:53 compute-0 sudo[144578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:53 compute-0 python3.9[144580]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:53 compute-0 sudo[144578]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:53 compute-0 sudo[144730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxrwlutpvxzmncmknncxzmfqhvpwfhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852753.4739547-506-222491920134615/AnsiballZ_file.py'
Jan 31 09:45:53 compute-0 sudo[144730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:53 compute-0 python3.9[144732]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:53 compute-0 sudo[144730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:54 compute-0 sudo[144882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqoabtklschfxfthixgyfxoplfyemlqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852754.04748-506-53085407643531/AnsiballZ_file.py'
Jan 31 09:45:54 compute-0 sudo[144882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:54 compute-0 python3.9[144884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:54 compute-0 sudo[144882]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:54 compute-0 sudo[145034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsrjkimkjidewrfrnszwbxddlgovdhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852754.6509035-506-12656074324372/AnsiballZ_file.py'
Jan 31 09:45:54 compute-0 sudo[145034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:55 compute-0 python3.9[145036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:55 compute-0 sudo[145034]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:55 compute-0 sudo[145186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkikvorfaffwtuhmqfsfhpprksedffq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852755.246922-506-274726694947747/AnsiballZ_file.py'
Jan 31 09:45:55 compute-0 sudo[145186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:55 compute-0 python3.9[145188]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:55 compute-0 sudo[145186]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:55 compute-0 sudo[145338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhcxexltrmbtnjfiwyfbktgtujwvnkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852755.7946239-506-173883514083480/AnsiballZ_file.py'
Jan 31 09:45:55 compute-0 sudo[145338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:56 compute-0 python3.9[145340]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:45:56 compute-0 sudo[145338]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:56 compute-0 podman[145341]: 2026-01-31 09:45:56.293164399 +0000 UTC m=+0.072462329 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:45:56 compute-0 python3.9[145516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:45:57 compute-0 sudo[145666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pubpyuzkbhknpncgnihejvontxyeevyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852757.1309364-557-268830795571981/AnsiballZ_stat.py'
Jan 31 09:45:57 compute-0 sudo[145666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:57 compute-0 python3.9[145668]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:45:58 compute-0 sudo[145666]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:58 compute-0 sudo[145791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olftkecuvycyhgwgixdhuybkplhiwwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852757.1309364-557-268830795571981/AnsiballZ_copy.py'
Jan 31 09:45:58 compute-0 sudo[145791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:58 compute-0 python3.9[145793]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852757.1309364-557-268830795571981/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:45:58 compute-0 sudo[145791]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:59 compute-0 sudo[145943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lslxyiyybiwcedkdzgofzwxpzbsxmahw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852758.8733778-557-160059030795518/AnsiballZ_stat.py'
Jan 31 09:45:59 compute-0 sudo[145943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:59 compute-0 python3.9[145945]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:45:59 compute-0 sudo[145943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:45:59 compute-0 sudo[146068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywbixzpkncemdcxkqgjkmsbqkstnzbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852758.8733778-557-160059030795518/AnsiballZ_copy.py'
Jan 31 09:45:59 compute-0 sudo[146068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:45:59 compute-0 python3.9[146070]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852758.8733778-557-160059030795518/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:45:59 compute-0 sudo[146068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:00 compute-0 sudo[146220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtscpkzjoivlbxzxqwarzqzpubsrfxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852760.0419514-557-258451626949172/AnsiballZ_stat.py'
Jan 31 09:46:00 compute-0 sudo[146220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:00 compute-0 python3.9[146222]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:00 compute-0 sudo[146220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:00 compute-0 sudo[146345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqtfcwfdrxjkuywkzgwkucjxqxlqfil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852760.0419514-557-258451626949172/AnsiballZ_copy.py'
Jan 31 09:46:00 compute-0 sudo[146345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:00 compute-0 python3.9[146347]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852760.0419514-557-258451626949172/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:01 compute-0 sudo[146345]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:01 compute-0 sudo[146497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgkhforiuulfwsnyirdmibxbjekxtbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852761.329628-557-52060747955317/AnsiballZ_stat.py'
Jan 31 09:46:01 compute-0 sudo[146497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:01 compute-0 python3.9[146499]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:01 compute-0 sudo[146497]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:02 compute-0 sudo[146622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvthacnaqcxuyafyqnpnvdvvsettmewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852761.329628-557-52060747955317/AnsiballZ_copy.py'
Jan 31 09:46:02 compute-0 sudo[146622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:02 compute-0 python3.9[146624]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852761.329628-557-52060747955317/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:02 compute-0 sudo[146622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:02 compute-0 sudo[146774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iygxxecfkreeipddrupztwbljptccoke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852762.4330878-557-68347044976204/AnsiballZ_stat.py'
Jan 31 09:46:02 compute-0 sudo[146774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:02 compute-0 python3.9[146776]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:02 compute-0 sudo[146774]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:03 compute-0 sudo[146899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlhtyycakfozeydkcggsvrdwqcmznlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852762.4330878-557-68347044976204/AnsiballZ_copy.py'
Jan 31 09:46:03 compute-0 sudo[146899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:03 compute-0 python3.9[146901]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852762.4330878-557-68347044976204/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:03 compute-0 sudo[146899]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:03 compute-0 sudo[147051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzphcgyetwypgctswinoeplkwnlsozvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852763.6277795-557-137115378064855/AnsiballZ_stat.py'
Jan 31 09:46:03 compute-0 sudo[147051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:04 compute-0 python3.9[147053]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:04 compute-0 sudo[147051]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:04 compute-0 sudo[147176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdrfddqmqubujraxwjekueqtwyonmqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852763.6277795-557-137115378064855/AnsiballZ_copy.py'
Jan 31 09:46:04 compute-0 sudo[147176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:04 compute-0 python3.9[147178]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852763.6277795-557-137115378064855/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:04 compute-0 sudo[147176]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:05 compute-0 sudo[147328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhspjvsbdfkdhwwnlzazibugrefhhwkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852765.0916948-557-199618427747153/AnsiballZ_stat.py'
Jan 31 09:46:05 compute-0 sudo[147328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:05 compute-0 python3.9[147330]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:05 compute-0 sudo[147328]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:05 compute-0 sudo[147451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbpewawbtostfkytxtytfixsjdnkhrnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852765.0916948-557-199618427747153/AnsiballZ_copy.py'
Jan 31 09:46:05 compute-0 sudo[147451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:06 compute-0 python3.9[147453]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852765.0916948-557-199618427747153/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:06 compute-0 sudo[147451]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:06 compute-0 sudo[147603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbdyuueqpljcuoifyfhjppsrzyanziut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852766.2304492-557-154555658842295/AnsiballZ_stat.py'
Jan 31 09:46:06 compute-0 sudo[147603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:06 compute-0 python3.9[147605]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:06 compute-0 sudo[147603]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:07 compute-0 sudo[147728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfywkflfewuyzjosqstdzhtzslcorev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852766.2304492-557-154555658842295/AnsiballZ_copy.py'
Jan 31 09:46:07 compute-0 sudo[147728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:07 compute-0 python3.9[147730]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769852766.2304492-557-154555658842295/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:07 compute-0 sudo[147728]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:07 compute-0 sudo[147880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygeqodpkxajfltystdyejgxxiztkljxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852767.4930913-670-237096337440016/AnsiballZ_command.py'
Jan 31 09:46:07 compute-0 sudo[147880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:07 compute-0 python3.9[147882]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 09:46:07 compute-0 sudo[147880]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:08 compute-0 sudo[148033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyctqvkavizqzrgjejjeceilblnzwkqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852768.1957524-679-43350289296182/AnsiballZ_file.py'
Jan 31 09:46:08 compute-0 sudo[148033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:08 compute-0 python3.9[148035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:08 compute-0 sudo[148033]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:09 compute-0 sudo[148185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igczjslcsmfymjzymoelycxidirodibd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852768.839428-679-131007299040867/AnsiballZ_file.py'
Jan 31 09:46:09 compute-0 sudo[148185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:09 compute-0 python3.9[148187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:09 compute-0 sudo[148185]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:09 compute-0 sudo[148337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odygsakzhmskbsfhdjuvxebhxhzufjts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852769.5788167-679-91424441827504/AnsiballZ_file.py'
Jan 31 09:46:09 compute-0 sudo[148337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:09 compute-0 python3.9[148339]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:10 compute-0 sudo[148337]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:10 compute-0 sudo[148489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhfxksqknqdfqqhepwlhlvgocmgcszm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852770.1422844-679-101728477982926/AnsiballZ_file.py'
Jan 31 09:46:10 compute-0 sudo[148489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:10 compute-0 python3.9[148491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:10 compute-0 sudo[148489]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:10 compute-0 sudo[148641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwljrdhxzzixfwcknduskgjrpbwnrgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852770.7279248-679-232439176212655/AnsiballZ_file.py'
Jan 31 09:46:10 compute-0 sudo[148641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:11 compute-0 python3.9[148643]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:11 compute-0 sudo[148641]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:11 compute-0 sudo[148793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdmaasmcjozymolvezftumkkforablc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852771.2835777-679-163447456513757/AnsiballZ_file.py'
Jan 31 09:46:11 compute-0 sudo[148793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:11 compute-0 python3.9[148795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:11 compute-0 sudo[148793]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:12 compute-0 sudo[148945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsxfjiwkvvcdayscqvxaqxumcpdxkhov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852771.8089712-679-189506775135423/AnsiballZ_file.py'
Jan 31 09:46:12 compute-0 sudo[148945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:12 compute-0 python3.9[148947]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:12 compute-0 sudo[148945]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:12 compute-0 sudo[149097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyiiryrpcevztqfjdpqeannvydiecpmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852772.3942115-679-150412039092100/AnsiballZ_file.py'
Jan 31 09:46:12 compute-0 sudo[149097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:12 compute-0 python3.9[149099]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:12 compute-0 sudo[149097]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:13 compute-0 sudo[149249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfcthwaoweeklrvxryoftkrhwqyuvpjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852772.9273322-679-126846052637526/AnsiballZ_file.py'
Jan 31 09:46:13 compute-0 sudo[149249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:13 compute-0 python3.9[149251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:13 compute-0 sudo[149249]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:14 compute-0 sudo[149401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyzkadchblyfkkeahwyegeokdbwfoyez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852773.5096703-679-52162250505342/AnsiballZ_file.py'
Jan 31 09:46:14 compute-0 sudo[149401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:14 compute-0 python3.9[149403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:14 compute-0 sudo[149401]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:14 compute-0 sudo[149553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwekbydyiiajihrycqiosafnpahukpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852774.34798-679-277892571325701/AnsiballZ_file.py'
Jan 31 09:46:14 compute-0 sudo[149553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:14 compute-0 python3.9[149555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:14 compute-0 sudo[149553]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:15 compute-0 sudo[149705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaxgveiiwuvfjjuolezxdlfprnijuxzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852774.9356856-679-82492113178386/AnsiballZ_file.py'
Jan 31 09:46:15 compute-0 sudo[149705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:15 compute-0 python3.9[149707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:15 compute-0 sudo[149705]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:15 compute-0 sudo[149857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdruezwlrmswhhcyeyqjxedgbtjgknt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852775.453435-679-179358347197987/AnsiballZ_file.py'
Jan 31 09:46:15 compute-0 sudo[149857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:15 compute-0 python3.9[149859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:15 compute-0 sudo[149857]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:16 compute-0 sudo[150009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylzfdomgfvqvynvcpzxyqodvwfbxeajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852775.9770648-679-192147061985721/AnsiballZ_file.py'
Jan 31 09:46:16 compute-0 sudo[150009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:46:16.403 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:46:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:46:16.404 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:46:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:46:16.404 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:46:16 compute-0 python3.9[150011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:16 compute-0 sudo[150009]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:17 compute-0 sudo[150161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwftifyeiluebnaunhpqsuayokjjrra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852776.6645932-778-236916363537419/AnsiballZ_stat.py'
Jan 31 09:46:17 compute-0 sudo[150161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:17 compute-0 python3.9[150163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:17 compute-0 sudo[150161]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:17 compute-0 sudo[150284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huoeqartpgznynvmxumdzkgtmvpbuzko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852776.6645932-778-236916363537419/AnsiballZ_copy.py'
Jan 31 09:46:17 compute-0 sudo[150284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:17 compute-0 python3.9[150286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852776.6645932-778-236916363537419/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:17 compute-0 sudo[150284]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:18 compute-0 sudo[150436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwcbkzcjvlpuxmfhdyultlmivskfpub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852777.9601483-778-234110541234682/AnsiballZ_stat.py'
Jan 31 09:46:18 compute-0 sudo[150436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:18 compute-0 python3.9[150438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:18 compute-0 sudo[150436]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:18 compute-0 sudo[150570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwrzqnwucpzofrasktestbzkowbjuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852777.9601483-778-234110541234682/AnsiballZ_copy.py'
Jan 31 09:46:18 compute-0 sudo[150570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:18 compute-0 podman[150533]: 2026-01-31 09:46:18.743935221 +0000 UTC m=+0.048022691 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 09:46:18 compute-0 python3.9[150579]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852777.9601483-778-234110541234682/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:18 compute-0 sudo[150570]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:19 compute-0 sudo[150729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxeqrtnlplugvgvtksipxgngppljwdve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852779.3662047-778-262126049395605/AnsiballZ_stat.py'
Jan 31 09:46:19 compute-0 sudo[150729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:19 compute-0 python3.9[150731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:19 compute-0 sudo[150729]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:20 compute-0 sudo[150852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjyfhtapnmonbopehpxgutagugulybx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852779.3662047-778-262126049395605/AnsiballZ_copy.py'
Jan 31 09:46:20 compute-0 sudo[150852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:20 compute-0 python3.9[150854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852779.3662047-778-262126049395605/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:20 compute-0 sudo[150852]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:20 compute-0 sudo[151004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukfhdkutnsuqpigpnkeloofqxpjgfeop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852780.5758758-778-107132076156774/AnsiballZ_stat.py'
Jan 31 09:46:20 compute-0 sudo[151004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:21 compute-0 python3.9[151006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:21 compute-0 sudo[151004]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:21 compute-0 sudo[151127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdoxidyinonwrkqtlpkqtmrosmcufwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852780.5758758-778-107132076156774/AnsiballZ_copy.py'
Jan 31 09:46:21 compute-0 sudo[151127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:21 compute-0 python3.9[151129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852780.5758758-778-107132076156774/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:21 compute-0 sudo[151127]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:22 compute-0 sudo[151279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sioglqlrsptwvvscujjefrqcilrxwuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852781.7729986-778-251237465853530/AnsiballZ_stat.py'
Jan 31 09:46:22 compute-0 sudo[151279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:22 compute-0 python3.9[151281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:22 compute-0 sudo[151279]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:22 compute-0 sudo[151402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddrusljzeaqmmwkrpofubgctueofwwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852781.7729986-778-251237465853530/AnsiballZ_copy.py'
Jan 31 09:46:22 compute-0 sudo[151402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:23 compute-0 python3.9[151404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852781.7729986-778-251237465853530/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:23 compute-0 sudo[151402]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:23 compute-0 sudo[151554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzknculrejgfaxvmckrufpznzuqfutpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852783.1480083-778-51778195567202/AnsiballZ_stat.py'
Jan 31 09:46:23 compute-0 sudo[151554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:23 compute-0 python3.9[151556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:23 compute-0 sudo[151554]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:24 compute-0 sudo[151677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxhdjrtwulqooksigbtynzmkgsthhwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852783.1480083-778-51778195567202/AnsiballZ_copy.py'
Jan 31 09:46:24 compute-0 sudo[151677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:24 compute-0 python3.9[151679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852783.1480083-778-51778195567202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:24 compute-0 sudo[151677]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:24 compute-0 sudo[151829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozedemzuwdtiwvuijfonaktvigwwatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852784.371719-778-149951680350985/AnsiballZ_stat.py'
Jan 31 09:46:24 compute-0 sudo[151829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:24 compute-0 python3.9[151831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:24 compute-0 sudo[151829]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:25 compute-0 sudo[151952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilvopjyzfopfxohibjetggrvcsdpxugv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852784.371719-778-149951680350985/AnsiballZ_copy.py'
Jan 31 09:46:25 compute-0 sudo[151952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:25 compute-0 python3.9[151954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852784.371719-778-149951680350985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:25 compute-0 sudo[151952]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:25 compute-0 sudo[152104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecpfevgflcwkijanwdhozhfsgepkzdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852785.67982-778-11616966126548/AnsiballZ_stat.py'
Jan 31 09:46:25 compute-0 sudo[152104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:26 compute-0 python3.9[152106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:26 compute-0 sudo[152104]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:26 compute-0 sudo[152244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxgmdqeaxxlqrpmroeglcvwbotcpftrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852785.67982-778-11616966126548/AnsiballZ_copy.py'
Jan 31 09:46:26 compute-0 sudo[152244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:26 compute-0 podman[152201]: 2026-01-31 09:46:26.537028369 +0000 UTC m=+0.098479447 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 09:46:26 compute-0 python3.9[152249]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852785.67982-778-11616966126548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:26 compute-0 sudo[152244]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:27 compute-0 sudo[152405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfmaqmlqrtqpmwlritpbxswzgugsufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852786.8118494-778-42751334589305/AnsiballZ_stat.py'
Jan 31 09:46:27 compute-0 sudo[152405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:27 compute-0 python3.9[152407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:27 compute-0 sudo[152405]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:27 compute-0 sudo[152528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebhotszmptqkbkjuarvcezbdehyzmorp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852786.8118494-778-42751334589305/AnsiballZ_copy.py'
Jan 31 09:46:27 compute-0 sudo[152528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:27 compute-0 python3.9[152530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852786.8118494-778-42751334589305/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:27 compute-0 sudo[152528]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:28 compute-0 sudo[152680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqwwoluscxgrtbrjdoakndmqlzyehpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852787.8917665-778-195228692399244/AnsiballZ_stat.py'
Jan 31 09:46:28 compute-0 sudo[152680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:28 compute-0 python3.9[152682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:28 compute-0 sudo[152680]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:28 compute-0 sudo[152803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhoznrvjdldyjhgnfctggfpegqmtknca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852787.8917665-778-195228692399244/AnsiballZ_copy.py'
Jan 31 09:46:28 compute-0 sudo[152803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:28 compute-0 python3.9[152805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852787.8917665-778-195228692399244/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:28 compute-0 sudo[152803]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:29 compute-0 sudo[152955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fncyseprsoryphlwuggmtdwoskcfmprz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852789.003843-778-226673064920922/AnsiballZ_stat.py'
Jan 31 09:46:29 compute-0 sudo[152955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:29 compute-0 python3.9[152957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:29 compute-0 sudo[152955]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:29 compute-0 sudo[153078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvmbpabtejhbksowshycxogooxgwmnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852789.003843-778-226673064920922/AnsiballZ_copy.py'
Jan 31 09:46:29 compute-0 sudo[153078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:29 compute-0 python3.9[153080]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852789.003843-778-226673064920922/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:29 compute-0 sudo[153078]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:30 compute-0 sudo[153230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdfeckgplckvwvkxqamumzqzkfzlyqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852790.0550995-778-53841193328516/AnsiballZ_stat.py'
Jan 31 09:46:30 compute-0 sudo[153230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:30 compute-0 python3.9[153232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:30 compute-0 sudo[153230]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:30 compute-0 sudo[153353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvguytdkhxqjhuqdllygqgnqbosehuss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852790.0550995-778-53841193328516/AnsiballZ_copy.py'
Jan 31 09:46:30 compute-0 sudo[153353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:31 compute-0 python3.9[153355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852790.0550995-778-53841193328516/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:31 compute-0 sudo[153353]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:31 compute-0 sudo[153505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjoxqzopbmlakecjkdbsqjixxmaogvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852791.1594-778-83181260584332/AnsiballZ_stat.py'
Jan 31 09:46:31 compute-0 sudo[153505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:31 compute-0 python3.9[153507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:31 compute-0 sudo[153505]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:31 compute-0 sudo[153628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wewfxvdpqoqygvkpngbobkmnpyykmzue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852791.1594-778-83181260584332/AnsiballZ_copy.py'
Jan 31 09:46:31 compute-0 sudo[153628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:32 compute-0 python3.9[153630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852791.1594-778-83181260584332/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:32 compute-0 sudo[153628]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:32 compute-0 sudo[153780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknhbfdjwbzmxdtdphysfxzrnbqsluqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852792.3322968-778-160587461240193/AnsiballZ_stat.py'
Jan 31 09:46:32 compute-0 sudo[153780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:32 compute-0 python3.9[153782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:32 compute-0 sudo[153780]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:33 compute-0 sudo[153903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bltgchtikmdqgknidjnibnmmsknvfjdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852792.3322968-778-160587461240193/AnsiballZ_copy.py'
Jan 31 09:46:33 compute-0 sudo[153903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:33 compute-0 python3.9[153905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852792.3322968-778-160587461240193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:33 compute-0 sudo[153903]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:33 compute-0 python3.9[154055]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:46:34 compute-0 sudo[154208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgbfkjyktsxgotvsasmxnhyautsdxdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852794.1719928-984-17461969377651/AnsiballZ_seboolean.py'
Jan 31 09:46:34 compute-0 sudo[154208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:34 compute-0 python3.9[154210]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 09:46:35 compute-0 sudo[154208]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:36 compute-0 sudo[154364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qffzbeoprlmovahnqyfljlycneancgzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852796.150946-992-137223737143431/AnsiballZ_copy.py'
Jan 31 09:46:36 compute-0 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 09:46:36 compute-0 sudo[154364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:36 compute-0 python3.9[154366]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:36 compute-0 sudo[154364]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:36 compute-0 sudo[154516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoplxurvacwtkvbycftvnwgebyndgvzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852796.7323983-992-249814170896505/AnsiballZ_copy.py'
Jan 31 09:46:36 compute-0 sudo[154516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:37 compute-0 python3.9[154518]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:37 compute-0 sudo[154516]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:37 compute-0 sudo[154668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvnedwrspfmuthcwfbmhnlibblmujbur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852797.2924297-992-271695937137226/AnsiballZ_copy.py'
Jan 31 09:46:37 compute-0 sudo[154668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:37 compute-0 python3.9[154670]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:37 compute-0 sudo[154668]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:38 compute-0 sudo[154820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shgnxnnnaulsyoxnhailjxipkaohdzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852798.0941675-992-276754531544868/AnsiballZ_copy.py'
Jan 31 09:46:38 compute-0 sudo[154820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:38 compute-0 python3.9[154822]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:38 compute-0 sudo[154820]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:38 compute-0 sudo[154972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzwsnbsmngbbfjsklxrnhhccpujtupv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852798.6358812-992-77788910818986/AnsiballZ_copy.py'
Jan 31 09:46:38 compute-0 sudo[154972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:39 compute-0 python3.9[154974]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:39 compute-0 sudo[154972]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:39 compute-0 sudo[155124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoaknfbbhldmrpsonbavtzoakxorsyjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852799.289552-1028-231217629933732/AnsiballZ_copy.py'
Jan 31 09:46:39 compute-0 sudo[155124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:39 compute-0 python3.9[155126]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:39 compute-0 sudo[155124]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:40 compute-0 sudo[155276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldjujvqhwjudilcckzceqwoptmmluhsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852799.912277-1028-70514505572664/AnsiballZ_copy.py'
Jan 31 09:46:40 compute-0 sudo[155276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:40 compute-0 python3.9[155278]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:40 compute-0 sudo[155276]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:40 compute-0 sudo[155428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asalbgncinjmlxxwdbbpvjlbavpnuyde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852800.449609-1028-7974286562314/AnsiballZ_copy.py'
Jan 31 09:46:40 compute-0 sudo[155428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:40 compute-0 python3.9[155430]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:40 compute-0 sudo[155428]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:41 compute-0 sudo[155580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhusftfgafnmigczqjzustccjjtrlbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852800.9480975-1028-98651845448080/AnsiballZ_copy.py'
Jan 31 09:46:41 compute-0 sudo[155580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:41 compute-0 python3.9[155582]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:41 compute-0 sudo[155580]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:41 compute-0 sudo[155732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aolxptacssbmubfvnicszqzytplzrbjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852801.5684946-1028-229451890466547/AnsiballZ_copy.py'
Jan 31 09:46:41 compute-0 sudo[155732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:42 compute-0 python3.9[155734]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:42 compute-0 sudo[155732]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:42 compute-0 sudo[155884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skuzuqkcwilzsijjoyggipliawhaujdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852802.1684537-1064-122691755708739/AnsiballZ_systemd.py'
Jan 31 09:46:42 compute-0 sudo[155884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:42 compute-0 python3.9[155886]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:46:42 compute-0 systemd[1]: Reloading.
Jan 31 09:46:42 compute-0 systemd-rc-local-generator[155914]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:46:42 compute-0 systemd-sysv-generator[155918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:46:42 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 09:46:43 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 09:46:43 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 09:46:43 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 09:46:43 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 31 09:46:43 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 31 09:46:43 compute-0 sudo[155884]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:43 compute-0 sudo[156078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojqczxnecxjzztwssogcyhijenypqnog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852803.2353442-1064-67439649096507/AnsiballZ_systemd.py'
Jan 31 09:46:43 compute-0 sudo[156078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:43 compute-0 python3.9[156080]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:46:43 compute-0 systemd[1]: Reloading.
Jan 31 09:46:43 compute-0 systemd-rc-local-generator[156099]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:46:43 compute-0 systemd-sysv-generator[156102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:46:44 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 09:46:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 09:46:44 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 09:46:44 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 09:46:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 09:46:44 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 09:46:44 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 09:46:44 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 09:46:44 compute-0 sudo[156078]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:44 compute-0 sudo[156293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhbwynnwujbbyiqqhmvnrgnyyfpmcipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852804.181497-1064-133245696205273/AnsiballZ_systemd.py'
Jan 31 09:46:44 compute-0 sudo[156293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:44 compute-0 python3.9[156295]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:46:44 compute-0 systemd[1]: Reloading.
Jan 31 09:46:44 compute-0 systemd-rc-local-generator[156315]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:46:44 compute-0 systemd-sysv-generator[156321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:46:45 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 09:46:45 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 09:46:45 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 09:46:45 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 09:46:45 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 09:46:45 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 09:46:45 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 09:46:45 compute-0 sudo[156293]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:45 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 09:46:45 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 09:46:45 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 09:46:45 compute-0 sudo[156512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmgskefjcupeotdlcuwdhlmfmtfshxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852805.4159107-1064-239322161818923/AnsiballZ_systemd.py'
Jan 31 09:46:45 compute-0 sudo[156512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:46 compute-0 python3.9[156514]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:46:46 compute-0 systemd[1]: Reloading.
Jan 31 09:46:46 compute-0 systemd-rc-local-generator[156543]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:46:46 compute-0 systemd-sysv-generator[156546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:46:46 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 09:46:46 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 09:46:46 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 09:46:46 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 09:46:46 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 09:46:46 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 09:46:46 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 09:46:46 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 09:46:46 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 09:46:46 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 09:46:46 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 09:46:46 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 09:46:46 compute-0 sudo[156512]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:46 compute-0 setroubleshoot[156332]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 59e499a3-06ce-444b-b2cf-6ade1d60b56f
Jan 31 09:46:46 compute-0 setroubleshoot[156332]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 09:46:46 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:46:46 compute-0 sudo[156731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leiknfbczbzjbkkwrdojcoqpwvdhxpkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852806.4961-1064-171744228267171/AnsiballZ_systemd.py'
Jan 31 09:46:46 compute-0 sudo[156731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:47 compute-0 python3.9[156733]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:46:47 compute-0 systemd[1]: Reloading.
Jan 31 09:46:47 compute-0 systemd-sysv-generator[156764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:46:47 compute-0 systemd-rc-local-generator[156761]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:46:47 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 09:46:47 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 09:46:47 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 09:46:47 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 09:46:47 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 09:46:47 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 09:46:47 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 09:46:47 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 09:46:47 compute-0 sudo[156731]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:47 compute-0 sudo[156943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczriiqhltkqjwvyerelyrhrffgcegml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852807.6264174-1101-239044796629914/AnsiballZ_file.py'
Jan 31 09:46:47 compute-0 sudo[156943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:48 compute-0 python3.9[156945]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:48 compute-0 sudo[156943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:48 compute-0 sudo[157095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbffukwdvfrwlgfovnbjarjzjfpxlzoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852808.2224228-1109-116560056765791/AnsiballZ_find.py'
Jan 31 09:46:48 compute-0 sudo[157095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:48 compute-0 python3.9[157097]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:46:48 compute-0 sudo[157095]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:48 compute-0 podman[157122]: 2026-01-31 09:46:48.935097406 +0000 UTC m=+0.058706711 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:46:49 compute-0 sudo[157267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edslnhkjmpklqxwsznsctztakmdplcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852809.0036027-1123-237262750761278/AnsiballZ_stat.py'
Jan 31 09:46:49 compute-0 sudo[157267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:49 compute-0 python3.9[157269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:49 compute-0 sudo[157267]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:49 compute-0 sudo[157390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylptuvcjqqlsqkoqoqxnywpffykznqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852809.0036027-1123-237262750761278/AnsiballZ_copy.py'
Jan 31 09:46:49 compute-0 sudo[157390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:50 compute-0 python3.9[157392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852809.0036027-1123-237262750761278/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:50 compute-0 sudo[157390]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:50 compute-0 sudo[157542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrwkazvyhkolpfmyxmfhxlemyxigqnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852810.4888902-1139-113188784541524/AnsiballZ_file.py'
Jan 31 09:46:50 compute-0 sudo[157542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:50 compute-0 python3.9[157544]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:50 compute-0 sudo[157542]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:51 compute-0 sudo[157694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqrhwaghvvvsxokxosuxdgatjuhekgas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852811.158499-1147-1222445697810/AnsiballZ_stat.py'
Jan 31 09:46:51 compute-0 sudo[157694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:51 compute-0 python3.9[157696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:51 compute-0 sudo[157694]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:51 compute-0 sudo[157772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcuvaoegkknrdjfhqdbxghcxpzdnfuuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852811.158499-1147-1222445697810/AnsiballZ_file.py'
Jan 31 09:46:51 compute-0 sudo[157772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:52 compute-0 python3.9[157774]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:52 compute-0 sudo[157772]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:52 compute-0 sudo[157924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qadneiowlapiktuojwvxuitsnwfkzygw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852812.2001765-1159-269830791577157/AnsiballZ_stat.py'
Jan 31 09:46:52 compute-0 sudo[157924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:52 compute-0 python3.9[157926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:52 compute-0 sudo[157924]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:52 compute-0 sudo[158002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrfchqwoponejqsgwqdrdngabmhizae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852812.2001765-1159-269830791577157/AnsiballZ_file.py'
Jan 31 09:46:52 compute-0 sudo[158002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:52 compute-0 python3.9[158004]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wr7_8kbc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:52 compute-0 sudo[158002]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:53 compute-0 sudo[158154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnkwxdifwkulezatxsnfacisgtpnghq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852813.1390169-1171-109759519537416/AnsiballZ_stat.py'
Jan 31 09:46:53 compute-0 sudo[158154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:53 compute-0 python3.9[158156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:53 compute-0 sudo[158154]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:53 compute-0 sudo[158232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfikxsjrwniwafbdquzicrlicyjsmloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852813.1390169-1171-109759519537416/AnsiballZ_file.py'
Jan 31 09:46:53 compute-0 sudo[158232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:53 compute-0 python3.9[158234]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:53 compute-0 sudo[158232]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:54 compute-0 sudo[158384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieyyewfcgqkmqqhtxuyjvnzevqxpjdbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852814.1728525-1184-259508255884511/AnsiballZ_command.py'
Jan 31 09:46:54 compute-0 sudo[158384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:54 compute-0 python3.9[158386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:46:54 compute-0 sudo[158384]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:55 compute-0 sudo[158537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebvxatorkransmmlvewlzlycbmfvrrvr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852814.8422887-1192-47135116363588/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 09:46:55 compute-0 sudo[158537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:55 compute-0 python3[158539]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 09:46:55 compute-0 sudo[158537]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:55 compute-0 sudo[158689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhskofbraumtesdlmczqzznsbauculnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852815.637451-1200-170224568112148/AnsiballZ_stat.py'
Jan 31 09:46:55 compute-0 sudo[158689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:56 compute-0 python3.9[158691]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:56 compute-0 sudo[158689]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:56 compute-0 sudo[158767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzlvexaaykvcfimlfmyhooohkdldllyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852815.637451-1200-170224568112148/AnsiballZ_file.py'
Jan 31 09:46:56 compute-0 sudo[158767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:56 compute-0 python3.9[158769]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:56 compute-0 sudo[158767]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:56 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 09:46:56 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 09:46:56 compute-0 podman[158794]: 2026-01-31 09:46:56.72063866 +0000 UTC m=+0.069024330 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:46:56 compute-0 sudo[158946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufqmgirbalcgyqdefbaboptrwxkgfnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852816.677014-1212-237837471088925/AnsiballZ_stat.py'
Jan 31 09:46:56 compute-0 sudo[158946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:57 compute-0 python3.9[158948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:57 compute-0 sudo[158946]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:57 compute-0 sudo[159071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prcugpxswmezrbyweoeoxidaccucnvtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852816.677014-1212-237837471088925/AnsiballZ_copy.py'
Jan 31 09:46:57 compute-0 sudo[159071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:57 compute-0 python3.9[159073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852816.677014-1212-237837471088925/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:57 compute-0 sudo[159071]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:58 compute-0 sudo[159223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbabcfvuealxfscqeqauhdbjlhxmxtrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852817.8685095-1227-154781883650728/AnsiballZ_stat.py'
Jan 31 09:46:58 compute-0 sudo[159223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:58 compute-0 python3.9[159225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:58 compute-0 sudo[159223]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:58 compute-0 sudo[159301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqrghogsknrczsyafujkwdnqwikfnltq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852817.8685095-1227-154781883650728/AnsiballZ_file.py'
Jan 31 09:46:58 compute-0 sudo[159301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:58 compute-0 python3.9[159303]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:58 compute-0 sudo[159301]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:59 compute-0 sudo[159453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnaldhxmuhtseniewhtbwwttqsghjgax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852818.9362142-1239-112643230949611/AnsiballZ_stat.py'
Jan 31 09:46:59 compute-0 sudo[159453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:59 compute-0 python3.9[159455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:46:59 compute-0 sudo[159453]: pam_unix(sudo:session): session closed for user root
Jan 31 09:46:59 compute-0 sudo[159531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqfrxtlxginrekeezglfjceavjphzyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852818.9362142-1239-112643230949611/AnsiballZ_file.py'
Jan 31 09:46:59 compute-0 sudo[159531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:46:59 compute-0 python3.9[159533]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:46:59 compute-0 sudo[159531]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:00 compute-0 sudo[159683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebdtgpnkdjxxmbicqkabhajpufzepcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852819.9399958-1251-35578596277186/AnsiballZ_stat.py'
Jan 31 09:47:00 compute-0 sudo[159683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:00 compute-0 python3.9[159685]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:00 compute-0 sudo[159683]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:00 compute-0 sudo[159808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmyzngsgvybkglnyzddzxafskdmxufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852819.9399958-1251-35578596277186/AnsiballZ_copy.py'
Jan 31 09:47:00 compute-0 sudo[159808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:00 compute-0 python3.9[159810]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769852819.9399958-1251-35578596277186/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:00 compute-0 sudo[159808]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:01 compute-0 sudo[159960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-birnkywvhdhgrzzmaxoqlxtrcjshorrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852821.1337168-1266-14085055307127/AnsiballZ_file.py'
Jan 31 09:47:01 compute-0 sudo[159960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:01 compute-0 python3.9[159962]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:01 compute-0 sudo[159960]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:01 compute-0 sudo[160112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnjpdhdlggwxcuwmzfzdqzvylirmzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852821.7333302-1274-149959203086745/AnsiballZ_command.py'
Jan 31 09:47:01 compute-0 sudo[160112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:02 compute-0 python3.9[160114]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:02 compute-0 sudo[160112]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:02 compute-0 sudo[160267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gckuwkgwgjqzyflohvqkillzdresuxqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852822.3664699-1282-197926170189451/AnsiballZ_blockinfile.py'
Jan 31 09:47:02 compute-0 sudo[160267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:02 compute-0 python3.9[160269]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:02 compute-0 sudo[160267]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:03 compute-0 sudo[160419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlnmzvnqjgmijgajttrcydppkfvhswjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852823.1687007-1291-20144506910216/AnsiballZ_command.py'
Jan 31 09:47:03 compute-0 sudo[160419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:03 compute-0 python3.9[160421]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:03 compute-0 sudo[160419]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:03 compute-0 sudo[160572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeuayjxizriakjormfshrzncwoifdbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852823.7207909-1299-102541201061726/AnsiballZ_stat.py'
Jan 31 09:47:03 compute-0 sudo[160572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:04 compute-0 python3.9[160574]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:47:04 compute-0 sudo[160572]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:04 compute-0 sudo[160726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moqoppmhcbaonivxrdrdqwultpgrnzjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852824.2715065-1307-17889408304687/AnsiballZ_command.py'
Jan 31 09:47:04 compute-0 sudo[160726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:04 compute-0 python3.9[160728]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:04 compute-0 sudo[160726]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:05 compute-0 sudo[160881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giktgrwgcrtqweaabezoccwxqvfksqzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852824.8520987-1315-148705749615020/AnsiballZ_file.py'
Jan 31 09:47:05 compute-0 sudo[160881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:05 compute-0 python3.9[160883]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:05 compute-0 sudo[160881]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:05 compute-0 sudo[161033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdxtnwrctvifxldjyylrjhxmxyihbeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852825.3781433-1323-102686347491969/AnsiballZ_stat.py'
Jan 31 09:47:05 compute-0 sudo[161033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:05 compute-0 python3.9[161035]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:05 compute-0 sudo[161033]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:06 compute-0 sudo[161156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqhvrnweasaplnpamzgkusewjxtbjwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852825.3781433-1323-102686347491969/AnsiballZ_copy.py'
Jan 31 09:47:06 compute-0 sudo[161156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:06 compute-0 python3.9[161158]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852825.3781433-1323-102686347491969/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:06 compute-0 sudo[161156]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:06 compute-0 sudo[161308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjnxvlpeszdbikiootsgkjlgrdzhfuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852826.449995-1338-22862901553249/AnsiballZ_stat.py'
Jan 31 09:47:06 compute-0 sudo[161308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:06 compute-0 python3.9[161310]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:06 compute-0 sudo[161308]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:07 compute-0 sudo[161431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywzdjocsngacssufwhianqeiywgtjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852826.449995-1338-22862901553249/AnsiballZ_copy.py'
Jan 31 09:47:07 compute-0 sudo[161431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:07 compute-0 python3.9[161433]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852826.449995-1338-22862901553249/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:07 compute-0 sudo[161431]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:07 compute-0 sudo[161583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikvwdwratccccrajyekyddxsplarczj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852827.5541034-1353-157175248541197/AnsiballZ_stat.py'
Jan 31 09:47:07 compute-0 sudo[161583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:08 compute-0 python3.9[161585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:08 compute-0 sudo[161583]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:08 compute-0 sudo[161706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jylgnviyxhbtcmunljykwvmdgxnumrra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852827.5541034-1353-157175248541197/AnsiballZ_copy.py'
Jan 31 09:47:08 compute-0 sudo[161706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:08 compute-0 python3.9[161708]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852827.5541034-1353-157175248541197/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:08 compute-0 sudo[161706]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:08 compute-0 sudo[161858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzeutaawsvrvducxoyuywymnwwxddafp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852828.658296-1368-270648753159936/AnsiballZ_systemd.py'
Jan 31 09:47:08 compute-0 sudo[161858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:09 compute-0 python3.9[161860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:47:09 compute-0 systemd[1]: Reloading.
Jan 31 09:47:09 compute-0 systemd-sysv-generator[161888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:47:09 compute-0 systemd-rc-local-generator[161885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:47:09 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 09:47:09 compute-0 sudo[161858]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:09 compute-0 sudo[162049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmhttvlmzpvntclzkgbdkexquvipcbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852829.687542-1376-113509184502513/AnsiballZ_systemd.py'
Jan 31 09:47:09 compute-0 sudo[162049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:10 compute-0 python3.9[162051]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 09:47:10 compute-0 systemd[1]: Reloading.
Jan 31 09:47:10 compute-0 systemd-rc-local-generator[162078]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:47:10 compute-0 systemd-sysv-generator[162082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:47:10 compute-0 systemd[1]: Reloading.
Jan 31 09:47:10 compute-0 systemd-rc-local-generator[162113]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:47:10 compute-0 systemd-sysv-generator[162119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:47:10 compute-0 sudo[162049]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:11 compute-0 sshd-session[107428]: Connection closed by 192.168.122.30 port 46402
Jan 31 09:47:11 compute-0 sshd-session[107425]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:47:11 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 31 09:47:11 compute-0 systemd[1]: session-22.scope: Consumed 2min 51.596s CPU time.
Jan 31 09:47:11 compute-0 systemd-logind[795]: Session 22 logged out. Waiting for processes to exit.
Jan 31 09:47:11 compute-0 systemd-logind[795]: Removed session 22.
Jan 31 09:47:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:47:16.405 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:47:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:47:16.406 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:47:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:47:16.407 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:47:17 compute-0 sshd-session[162148]: Accepted publickey for zuul from 192.168.122.30 port 47542 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:47:17 compute-0 systemd-logind[795]: New session 23 of user zuul.
Jan 31 09:47:17 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 31 09:47:17 compute-0 sshd-session[162148]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:47:18 compute-0 python3.9[162301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:47:19 compute-0 podman[162429]: 2026-01-31 09:47:19.502353437 +0000 UTC m=+0.070660597 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 09:47:19 compute-0 python3.9[162466]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:47:19 compute-0 network[162491]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:47:19 compute-0 network[162492]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:47:19 compute-0 network[162493]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:47:22 compute-0 sudo[162762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpezglahgwfqrleksoxuxetccrurysfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852842.1380317-42-74816364074479/AnsiballZ_setup.py'
Jan 31 09:47:22 compute-0 sudo[162762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:22 compute-0 python3.9[162764]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:47:22 compute-0 sudo[162762]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:23 compute-0 sudo[162846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmwoombycfgtcnvhpzmbkhqttxclkxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852842.1380317-42-74816364074479/AnsiballZ_dnf.py'
Jan 31 09:47:23 compute-0 sudo[162846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:23 compute-0 python3.9[162848]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:47:26 compute-0 podman[162850]: 2026-01-31 09:47:26.972081526 +0000 UTC m=+0.093443767 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 09:47:28 compute-0 sudo[162846]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:29 compute-0 sudo[163025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naranuesmlponsqsdfkxqbkhpkhbpxyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852848.9367008-54-111667558242303/AnsiballZ_stat.py'
Jan 31 09:47:29 compute-0 sudo[163025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:29 compute-0 python3.9[163027]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:47:29 compute-0 sudo[163025]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:30 compute-0 sudo[163177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfhzyursjewdfcaalusuqxqbpqanpemb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852849.7856119-64-28853093828664/AnsiballZ_command.py'
Jan 31 09:47:30 compute-0 sudo[163177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:30 compute-0 python3.9[163179]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:30 compute-0 sudo[163177]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:30 compute-0 sudo[163330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eirfeiskfpakshxwizposmkgtttrwxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852850.6830275-74-55521179527918/AnsiballZ_stat.py'
Jan 31 09:47:30 compute-0 sudo[163330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:31 compute-0 python3.9[163332]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:47:31 compute-0 sudo[163330]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:31 compute-0 sudo[163482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtnclbljjjgyfyssxqmqlqxybdifgnyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852851.2406425-82-119465303392671/AnsiballZ_command.py'
Jan 31 09:47:31 compute-0 sudo[163482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:31 compute-0 python3.9[163484]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:31 compute-0 sudo[163482]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:32 compute-0 sudo[163635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrlmroheqmzbwlzjptlyxzjkxypgwijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852851.8466637-90-57955941949342/AnsiballZ_stat.py'
Jan 31 09:47:32 compute-0 sudo[163635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:32 compute-0 python3.9[163637]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:32 compute-0 sudo[163635]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:32 compute-0 sudo[163758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjvovrlmejpuungrdyofxcwmsdvfzwao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852851.8466637-90-57955941949342/AnsiballZ_copy.py'
Jan 31 09:47:32 compute-0 sudo[163758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:33 compute-0 python3.9[163760]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852851.8466637-90-57955941949342/.source.iscsi _original_basename=.ugymspk3 follow=False checksum=97eacfeab10b2505a0d477f2e866ff3e3e97ba4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:33 compute-0 sudo[163758]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:33 compute-0 sudo[163910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirywaemkdwrmgumzilvqzouueejzjdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852853.363832-105-182890165317754/AnsiballZ_file.py'
Jan 31 09:47:33 compute-0 sudo[163910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:33 compute-0 python3.9[163912]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:33 compute-0 sudo[163910]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:34 compute-0 sudo[164062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okltfpfoydsuiqekteevptjkaezwnoyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852854.1313376-113-61404607116580/AnsiballZ_lineinfile.py'
Jan 31 09:47:34 compute-0 sudo[164062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:34 compute-0 python3.9[164064]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:34 compute-0 sudo[164062]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:35 compute-0 sudo[164214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftzahchzgfoctcsgbksbyktvuaqhrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852854.981531-122-131522375508113/AnsiballZ_systemd_service.py'
Jan 31 09:47:35 compute-0 sudo[164214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:35 compute-0 python3.9[164216]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:47:35 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 09:47:35 compute-0 sudo[164214]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:36 compute-0 sudo[164370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzeyvbxidynfsppevvsnbkrhpjisqatr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852856.1041727-130-268442798837712/AnsiballZ_systemd_service.py'
Jan 31 09:47:36 compute-0 sudo[164370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:36 compute-0 python3.9[164372]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:47:36 compute-0 systemd[1]: Reloading.
Jan 31 09:47:36 compute-0 systemd-rc-local-generator[164402]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:47:36 compute-0 systemd-sysv-generator[164405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:47:36 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 09:47:37 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 09:47:37 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 09:47:37 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 09:47:37 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 09:47:37 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 09:47:37 compute-0 sudo[164370]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:38 compute-0 python3.9[164571]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:47:38 compute-0 network[164588]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:47:38 compute-0 network[164589]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:47:38 compute-0 network[164590]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:47:40 compute-0 sudo[164859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlpljpntwzvfdflauamtamosppwdujdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852860.6201482-153-143297067023180/AnsiballZ_dnf.py'
Jan 31 09:47:40 compute-0 sudo[164859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:41 compute-0 python3.9[164861]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:47:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:47:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:47:43 compute-0 systemd[1]: Reloading.
Jan 31 09:47:43 compute-0 systemd-rc-local-generator[164903]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:47:43 compute-0 systemd-sysv-generator[164908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:47:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:47:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:47:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:47:43 compute-0 systemd[1]: run-rae1c0041c06942cbb0161e0735cd7eb1.service: Deactivated successfully.
Jan 31 09:47:43 compute-0 sudo[164859]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:44 compute-0 sudo[165174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczkrwwjgsusjtkjqluozscatbimdwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852863.917858-162-117267325090986/AnsiballZ_file.py'
Jan 31 09:47:44 compute-0 sudo[165174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:44 compute-0 python3.9[165176]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 09:47:44 compute-0 sudo[165174]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:45 compute-0 sudo[165326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbwwvdlcwzlzwcglpaxrvbjwvhertalt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852864.6999214-170-265913330936548/AnsiballZ_modprobe.py'
Jan 31 09:47:45 compute-0 sudo[165326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:45 compute-0 python3.9[165328]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 09:47:45 compute-0 sudo[165326]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:45 compute-0 sudo[165482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szoykgracnjwvfxkzhvwnufwsemmuklw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852865.4621575-178-3892800700813/AnsiballZ_stat.py'
Jan 31 09:47:45 compute-0 sudo[165482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:45 compute-0 python3.9[165484]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:45 compute-0 sudo[165482]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:46 compute-0 sudo[165605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobiemjqvncgrlvlxurkhockxwzjsfmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852865.4621575-178-3892800700813/AnsiballZ_copy.py'
Jan 31 09:47:46 compute-0 sudo[165605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:46 compute-0 python3.9[165607]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852865.4621575-178-3892800700813/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:46 compute-0 sudo[165605]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:47 compute-0 sudo[165757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpthishlxosyelyxsigaemrpmlclgvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852866.8086615-194-272741414716300/AnsiballZ_lineinfile.py'
Jan 31 09:47:47 compute-0 sudo[165757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:47 compute-0 python3.9[165759]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:47 compute-0 sudo[165757]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:47 compute-0 sudo[165909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiefseigkiusptlbdbnflihrqwwlguhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852867.391268-202-66263408807868/AnsiballZ_systemd.py'
Jan 31 09:47:47 compute-0 sudo[165909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:48 compute-0 python3.9[165911]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:47:48 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 09:47:48 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 09:47:48 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 09:47:48 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 09:47:48 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 09:47:48 compute-0 sudo[165909]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:48 compute-0 sudo[166065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdpoarmscyfspuaerfzgraslaqlvjcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852868.5522418-210-279681080190918/AnsiballZ_command.py'
Jan 31 09:47:48 compute-0 sudo[166065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:49 compute-0 python3.9[166067]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:49 compute-0 sudo[166065]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:49 compute-0 podman[166145]: 2026-01-31 09:47:49.943205256 +0000 UTC m=+0.060068330 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 09:47:50 compute-0 sudo[166237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpnkpiurqeigitthlokdzlrbjqdynyex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852869.7698379-220-72283950306334/AnsiballZ_stat.py'
Jan 31 09:47:50 compute-0 sudo[166237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:50 compute-0 python3.9[166239]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:47:50 compute-0 sudo[166237]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:50 compute-0 sudo[166389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhhvvulhggvdcicjvwdmkchrscqqewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852870.4847982-229-248027711045188/AnsiballZ_stat.py'
Jan 31 09:47:50 compute-0 sudo[166389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:50 compute-0 python3.9[166391]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:47:50 compute-0 sudo[166389]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:51 compute-0 sudo[166512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmlcutbxzwteitmubrvbqcbfialyxaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852870.4847982-229-248027711045188/AnsiballZ_copy.py'
Jan 31 09:47:51 compute-0 sudo[166512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:51 compute-0 python3.9[166514]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852870.4847982-229-248027711045188/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:51 compute-0 sudo[166512]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:52 compute-0 sudo[166664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilwlbvfeaivllauccrqldqnovwyvrgpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852871.9220383-244-246454557366386/AnsiballZ_command.py'
Jan 31 09:47:52 compute-0 sudo[166664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:52 compute-0 python3.9[166666]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:52 compute-0 sudo[166664]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:52 compute-0 sudo[166817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnaftaywbwssxucwrvqngwgltvoeqbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852872.563132-252-110823255730628/AnsiballZ_lineinfile.py'
Jan 31 09:47:52 compute-0 sudo[166817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:53 compute-0 python3.9[166819]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:53 compute-0 sudo[166817]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:53 compute-0 sudo[166969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzjgcmojculndbmfywclootqdjvokwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852873.2436569-260-189757093246784/AnsiballZ_replace.py'
Jan 31 09:47:53 compute-0 sudo[166969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:53 compute-0 python3.9[166971]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:53 compute-0 sudo[166969]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:54 compute-0 sudo[167121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxiavgefoaafjcgwyfafvqavwovzrjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852874.0797937-268-224434570042247/AnsiballZ_replace.py'
Jan 31 09:47:54 compute-0 sudo[167121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:54 compute-0 python3.9[167123]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:54 compute-0 sudo[167121]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:54 compute-0 sudo[167273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fluhsrunfrmprkqczvivcacvwkuokosh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852874.6895528-277-175883895469950/AnsiballZ_lineinfile.py'
Jan 31 09:47:54 compute-0 sudo[167273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:55 compute-0 python3.9[167275]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:55 compute-0 sudo[167273]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:55 compute-0 sudo[167425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiibsgkfilrksmocudzutwzxltoeizkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852875.2346606-277-11579683386969/AnsiballZ_lineinfile.py'
Jan 31 09:47:55 compute-0 sudo[167425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:55 compute-0 python3.9[167427]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:55 compute-0 sudo[167425]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:56 compute-0 sudo[167577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tveewbpuakrsvbtkkrxstyapdjkborlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852876.0506594-277-279460027465166/AnsiballZ_lineinfile.py'
Jan 31 09:47:56 compute-0 sudo[167577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:56 compute-0 python3.9[167579]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:56 compute-0 sudo[167577]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:56 compute-0 sudo[167729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvpwjrtdqbyybtgviapjedfjvtfdzyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852876.6670785-277-700469944747/AnsiballZ_lineinfile.py'
Jan 31 09:47:56 compute-0 sudo[167729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:57 compute-0 python3.9[167731]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:47:57 compute-0 sudo[167729]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:57 compute-0 sudo[167897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbzratdudbzbbrzwavpgmxyvfpwgkypf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852877.3393369-306-217516353162769/AnsiballZ_stat.py'
Jan 31 09:47:57 compute-0 sudo[167897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:57 compute-0 podman[167855]: 2026-01-31 09:47:57.700368174 +0000 UTC m=+0.136080745 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 09:47:57 compute-0 python3.9[167903]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:47:57 compute-0 sudo[167897]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:58 compute-0 sudo[168061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqppamrgoqidirvltoxkcwgajgkevwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852878.0166912-314-162651946497298/AnsiballZ_command.py'
Jan 31 09:47:58 compute-0 sudo[168061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:58 compute-0 python3.9[168063]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:47:58 compute-0 sudo[168061]: pam_unix(sudo:session): session closed for user root
Jan 31 09:47:58 compute-0 sudo[168214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsujtqkekvukfeetbnlghceodyeuzdhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852878.6871285-323-594025986261/AnsiballZ_systemd_service.py'
Jan 31 09:47:58 compute-0 sudo[168214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:47:59 compute-0 python3.9[168216]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:00 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 31 09:48:00 compute-0 sudo[168214]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:00 compute-0 sudo[168370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncufazokmjuxyelamdthxoaxjhbeytnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852880.5557463-331-63161903402133/AnsiballZ_systemd_service.py'
Jan 31 09:48:00 compute-0 sudo[168370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:01 compute-0 python3.9[168372]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:02 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 09:48:02 compute-0 udevadm[168377]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 09:48:02 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 09:48:02 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 09:48:02 compute-0 multipathd[168381]: --------start up--------
Jan 31 09:48:02 compute-0 multipathd[168381]: read /etc/multipath.conf
Jan 31 09:48:02 compute-0 multipathd[168381]: path checkers start up
Jan 31 09:48:02 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 09:48:02 compute-0 sudo[168370]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:02 compute-0 sudo[168538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfyhbpukrosovebcqucpzendobneuwyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852882.7187915-343-167126756557428/AnsiballZ_file.py'
Jan 31 09:48:02 compute-0 sudo[168538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:03 compute-0 python3.9[168540]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 09:48:03 compute-0 sudo[168538]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:03 compute-0 sudo[168690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnbhjxienwiigblotamtxhjjmoqarfox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852883.3112633-351-256926027693782/AnsiballZ_modprobe.py'
Jan 31 09:48:03 compute-0 sudo[168690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:03 compute-0 python3.9[168692]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 09:48:03 compute-0 kernel: Key type psk registered
Jan 31 09:48:03 compute-0 sudo[168690]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:04 compute-0 sudo[168854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwfqvrhhsqvkbqcxdgozppzqidmgiged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852883.9986653-359-41687979238307/AnsiballZ_stat.py'
Jan 31 09:48:04 compute-0 sudo[168854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:04 compute-0 python3.9[168856]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:48:04 compute-0 sudo[168854]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:04 compute-0 sudo[168977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqgscunskkoydlxjeektkufqhupqrkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852883.9986653-359-41687979238307/AnsiballZ_copy.py'
Jan 31 09:48:04 compute-0 sudo[168977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:04 compute-0 python3.9[168979]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769852883.9986653-359-41687979238307/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:05 compute-0 sudo[168977]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:05 compute-0 sudo[169129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmvhsqdpxdqfudbdaxltspkslylovhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852885.276086-375-248887490774495/AnsiballZ_lineinfile.py'
Jan 31 09:48:05 compute-0 sudo[169129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:05 compute-0 python3.9[169131]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:05 compute-0 sudo[169129]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:06 compute-0 sudo[169281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavzdkucmerkuijrenetgzinxfzwxpdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852885.8245938-383-41155472465124/AnsiballZ_systemd.py'
Jan 31 09:48:06 compute-0 sudo[169281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:06 compute-0 python3.9[169283]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:48:06 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 09:48:06 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 09:48:06 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 09:48:06 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 09:48:06 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 09:48:06 compute-0 sudo[169281]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:06 compute-0 sudo[169438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeeamhocygpkrtmvvidegxkkogbkxygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852886.642584-391-79125469131438/AnsiballZ_dnf.py'
Jan 31 09:48:06 compute-0 sudo[169438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:07 compute-0 python3.9[169440]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:48:09 compute-0 systemd[1]: Reloading.
Jan 31 09:48:09 compute-0 systemd-rc-local-generator[169469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:48:09 compute-0 systemd-sysv-generator[169473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:48:09 compute-0 systemd[1]: Reloading.
Jan 31 09:48:09 compute-0 systemd-rc-local-generator[169507]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:48:09 compute-0 systemd-sysv-generator[169510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:48:09 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 09:48:09 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 09:48:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 09:48:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 09:48:09 compute-0 systemd[1]: Reloading.
Jan 31 09:48:09 compute-0 systemd-rc-local-generator[169603]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:48:09 compute-0 systemd-sysv-generator[169607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:48:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 09:48:10 compute-0 sudo[169438]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:10 compute-0 sudo[170901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjyzlbxjpxuxmryjhrumlpubtqddgjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852890.6911225-399-261115358274790/AnsiballZ_systemd_service.py'
Jan 31 09:48:10 compute-0 sudo[170901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 09:48:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 09:48:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.072s CPU time.
Jan 31 09:48:11 compute-0 systemd[1]: run-rd062c2eb9b94408d96d0b9f49f9719b5.service: Deactivated successfully.
Jan 31 09:48:11 compute-0 python3.9[170903]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:48:11 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 31 09:48:11 compute-0 iscsid[164412]: iscsid shutting down.
Jan 31 09:48:11 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 09:48:11 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 31 09:48:11 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 09:48:11 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 09:48:11 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 09:48:11 compute-0 sudo[170901]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:11 compute-0 sudo[171058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkjqttjriltdlzqwnbyxxfozxxuqqaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852891.4752033-407-38030828984446/AnsiballZ_systemd_service.py'
Jan 31 09:48:11 compute-0 sudo[171058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:11 compute-0 python3.9[171060]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:48:12 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 09:48:12 compute-0 multipathd[168381]: exit (signal)
Jan 31 09:48:12 compute-0 multipathd[168381]: --------shut down-------
Jan 31 09:48:12 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 09:48:12 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 09:48:12 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 09:48:12 compute-0 multipathd[171066]: --------start up--------
Jan 31 09:48:12 compute-0 multipathd[171066]: read /etc/multipath.conf
Jan 31 09:48:12 compute-0 multipathd[171066]: path checkers start up
Jan 31 09:48:12 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 09:48:12 compute-0 sudo[171058]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:12 compute-0 python3.9[171223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:48:13 compute-0 sudo[171377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspnwowjqgzqaqwgfizxjppkkwlomspn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852893.2208235-425-278520157045669/AnsiballZ_file.py'
Jan 31 09:48:13 compute-0 sudo[171377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:13 compute-0 python3.9[171379]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:13 compute-0 sudo[171377]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:14 compute-0 sudo[171529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbnrrxkyjciwmqrscezbgehuievdoood ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852894.2729478-436-109666831105868/AnsiballZ_systemd_service.py'
Jan 31 09:48:14 compute-0 sudo[171529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:14 compute-0 python3.9[171531]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:48:14 compute-0 systemd[1]: Reloading.
Jan 31 09:48:14 compute-0 systemd-rc-local-generator[171554]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:48:14 compute-0 systemd-sysv-generator[171559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:48:15 compute-0 sudo[171529]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:15 compute-0 python3.9[171715]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:48:15 compute-0 network[171732]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:48:15 compute-0 network[171733]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:48:15 compute-0 network[171734]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:48:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:48:16.407 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:48:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:48:16.408 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:48:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:48:16.408 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:48:18 compute-0 sudo[172004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpahdeuxstrnfgwvvlqgfxwjbmcivwem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852898.4971104-455-142315763911182/AnsiballZ_systemd_service.py'
Jan 31 09:48:18 compute-0 sudo[172004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:19 compute-0 python3.9[172006]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:19 compute-0 sudo[172004]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:19 compute-0 sudo[172157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmdknhaqujzsfrwvohngdanzobficdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852899.2323492-455-281295284264289/AnsiballZ_systemd_service.py'
Jan 31 09:48:19 compute-0 sudo[172157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:19 compute-0 python3.9[172159]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:19 compute-0 sudo[172157]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:20 compute-0 sudo[172321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmptpqbhvuxpzcajpvmaslfbjwejbcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852899.9634416-455-193386500765122/AnsiballZ_systemd_service.py'
Jan 31 09:48:20 compute-0 sudo[172321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:20 compute-0 podman[172284]: 2026-01-31 09:48:20.263435371 +0000 UTC m=+0.064578239 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:48:20 compute-0 python3.9[172327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:20 compute-0 sudo[172321]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:20 compute-0 sudo[172481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrbdeubuymhbudzlmyzxqcpxdxjqxzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852900.6903622-455-10202616469992/AnsiballZ_systemd_service.py'
Jan 31 09:48:20 compute-0 sudo[172481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:21 compute-0 python3.9[172483]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:21 compute-0 sudo[172481]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:21 compute-0 sudo[172634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfkiolsqhvrezyxnfwppjtbenrxmpri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852901.3893194-455-264920503643860/AnsiballZ_systemd_service.py'
Jan 31 09:48:21 compute-0 sudo[172634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:21 compute-0 python3.9[172636]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:22 compute-0 sudo[172634]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:22 compute-0 sudo[172787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgfevckbyzqdwoilaedhsgnufqutgary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852902.13599-455-46762684710218/AnsiballZ_systemd_service.py'
Jan 31 09:48:22 compute-0 sudo[172787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:22 compute-0 python3.9[172789]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:22 compute-0 sudo[172787]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:23 compute-0 sudo[172940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizymunggflxxzxoheygzaltkycgzhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852902.893963-455-8816001725874/AnsiballZ_systemd_service.py'
Jan 31 09:48:23 compute-0 sudo[172940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:23 compute-0 python3.9[172942]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:23 compute-0 sudo[172940]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:23 compute-0 sudo[173093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iieepkmlnvvxwseqhndmowyeuxwvfyqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852903.6045814-455-89630907097641/AnsiballZ_systemd_service.py'
Jan 31 09:48:23 compute-0 sudo[173093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:24 compute-0 python3.9[173095]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:48:24 compute-0 sudo[173093]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:24 compute-0 sudo[173246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipeqanprjlhuynnhvtrpnmjhvbuubhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852904.5428016-514-156323192852917/AnsiballZ_file.py'
Jan 31 09:48:24 compute-0 sudo[173246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:24 compute-0 python3.9[173248]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:24 compute-0 sudo[173246]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:25 compute-0 sudo[173398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kutdpssphxijdpywrmnlwbluiffqohjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852905.0756755-514-204249958994026/AnsiballZ_file.py'
Jan 31 09:48:25 compute-0 sudo[173398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:25 compute-0 python3.9[173400]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:25 compute-0 sudo[173398]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:25 compute-0 sudo[173550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsirstaiqicozjdtdtfqdnqhagjezns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852905.6606665-514-177561408312470/AnsiballZ_file.py'
Jan 31 09:48:25 compute-0 sudo[173550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:26 compute-0 python3.9[173552]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:26 compute-0 sudo[173550]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:26 compute-0 sudo[173702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdfelqgoggkdgcvjajfmdgdjmbaiadhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852906.3278077-514-173778327529435/AnsiballZ_file.py'
Jan 31 09:48:26 compute-0 sudo[173702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:26 compute-0 python3.9[173704]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:26 compute-0 sudo[173702]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:27 compute-0 sudo[173854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqidswsjeyxnvphdmkzwflutjapsgquq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852906.8675992-514-205062931328040/AnsiballZ_file.py'
Jan 31 09:48:27 compute-0 sudo[173854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:27 compute-0 python3.9[173856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:27 compute-0 sudo[173854]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:27 compute-0 podman[173925]: 2026-01-31 09:48:27.941794951 +0000 UTC m=+0.066770311 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:48:28 compute-0 sudo[174031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjrduketgfjxwiuwswtgolxgrslonwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852907.810123-514-52952717864543/AnsiballZ_file.py'
Jan 31 09:48:28 compute-0 sudo[174031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:28 compute-0 python3.9[174033]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:28 compute-0 sudo[174031]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:28 compute-0 sudo[174183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyliplntzfqpvqzojdiasqoqbxtfvkcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852908.4558818-514-215836044664824/AnsiballZ_file.py'
Jan 31 09:48:28 compute-0 sudo[174183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:28 compute-0 python3.9[174185]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:28 compute-0 sudo[174183]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:29 compute-0 sudo[174335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzzytrenykwyzdqdhzwayqgblgekccta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852908.9975553-514-195109715502086/AnsiballZ_file.py'
Jan 31 09:48:29 compute-0 sudo[174335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:29 compute-0 python3.9[174337]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:29 compute-0 sudo[174335]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:29 compute-0 sudo[174487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjeljcuiocnavyrvufhzkgvvyddhrufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852909.6505606-571-33015749260795/AnsiballZ_file.py'
Jan 31 09:48:29 compute-0 sudo[174487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:30 compute-0 python3.9[174489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:30 compute-0 sudo[174487]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:30 compute-0 sudo[174639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eudhmgasacxbrkipdzjyhuumgxudsznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852910.2560802-571-267012865680074/AnsiballZ_file.py'
Jan 31 09:48:30 compute-0 sudo[174639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:30 compute-0 python3.9[174641]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:30 compute-0 sudo[174639]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:31 compute-0 sudo[174791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpvjovlyqxynrdtzuuomzadehhjrhczz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852910.854317-571-244857115072536/AnsiballZ_file.py'
Jan 31 09:48:31 compute-0 sudo[174791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:31 compute-0 python3.9[174793]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:31 compute-0 sudo[174791]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:31 compute-0 sudo[174943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxlvexgxndhbwxdxnaxllczkeidsbyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852911.4468524-571-277453703564057/AnsiballZ_file.py'
Jan 31 09:48:31 compute-0 sudo[174943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:31 compute-0 python3.9[174945]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:31 compute-0 sudo[174943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:32 compute-0 sudo[175095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwgddxlhtisqpbzfermfdukuqhnwgboi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852912.2013047-571-90826315205279/AnsiballZ_file.py'
Jan 31 09:48:32 compute-0 sudo[175095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:32 compute-0 python3.9[175097]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:32 compute-0 sudo[175095]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:33 compute-0 sudo[175247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyekeftofsyyngpdxmquafomjqpmcixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852912.8055398-571-38745126629933/AnsiballZ_file.py'
Jan 31 09:48:33 compute-0 sudo[175247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:33 compute-0 python3.9[175249]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:33 compute-0 sudo[175247]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:33 compute-0 sudo[175399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvpfviznzuwsrdjnjyxjgjvzongjxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852913.3805351-571-10564437885631/AnsiballZ_file.py'
Jan 31 09:48:33 compute-0 sudo[175399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:33 compute-0 python3.9[175401]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:33 compute-0 sudo[175399]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:34 compute-0 sudo[175551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuouxcfbdhizbgrkynfrruxqzydwzcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852914.004214-571-13742746938999/AnsiballZ_file.py'
Jan 31 09:48:34 compute-0 sudo[175551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:34 compute-0 python3.9[175553]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:48:34 compute-0 sudo[175551]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:34 compute-0 sudo[175703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvbjtulmavrtvljywlgwafjqvspbsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852914.6547873-629-61301156817910/AnsiballZ_command.py'
Jan 31 09:48:34 compute-0 sudo[175703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:35 compute-0 python3.9[175705]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:35 compute-0 sudo[175703]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:35 compute-0 python3.9[175857]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:48:36 compute-0 sudo[176007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqwwfbthxgpnsdkfcbzzbhliwsmekam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852916.1354582-647-151872062935287/AnsiballZ_systemd_service.py'
Jan 31 09:48:36 compute-0 sudo[176007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:36 compute-0 python3.9[176009]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:48:36 compute-0 systemd[1]: Reloading.
Jan 31 09:48:36 compute-0 systemd-rc-local-generator[176037]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:48:36 compute-0 systemd-sysv-generator[176040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:48:37 compute-0 sudo[176007]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:37 compute-0 sudo[176194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyzludzlvaafceejgrufujlysfvxeteg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852917.2658076-655-225769748026350/AnsiballZ_command.py'
Jan 31 09:48:37 compute-0 sudo[176194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:37 compute-0 python3.9[176196]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:37 compute-0 sudo[176194]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:38 compute-0 sudo[176347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynflsxqyotdsndchgizrdrgszevnykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852917.9350102-655-212875572842557/AnsiballZ_command.py'
Jan 31 09:48:38 compute-0 sudo[176347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:38 compute-0 python3.9[176349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:38 compute-0 sudo[176347]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:38 compute-0 sudo[176500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elkqdhyykupxaomxjqggadrrxftnghlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852918.53589-655-220332652579265/AnsiballZ_command.py'
Jan 31 09:48:38 compute-0 sudo[176500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:38 compute-0 python3.9[176502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:38 compute-0 sudo[176500]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:39 compute-0 sudo[176653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxxchwvfroijeywvwydwfqiqkzmyytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852919.0599327-655-102278211241787/AnsiballZ_command.py'
Jan 31 09:48:39 compute-0 sudo[176653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:39 compute-0 python3.9[176655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:39 compute-0 sudo[176653]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:39 compute-0 sudo[176806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prbaellcwlckkbasvdubdhitjcyaxmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852919.697684-655-260567820789799/AnsiballZ_command.py'
Jan 31 09:48:39 compute-0 sudo[176806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:40 compute-0 python3.9[176808]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:40 compute-0 sudo[176806]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:40 compute-0 sudo[176959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbcercgotdubmzndfvltxxdfvepvrjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852920.3609774-655-225169488963637/AnsiballZ_command.py'
Jan 31 09:48:40 compute-0 sudo[176959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:40 compute-0 python3.9[176961]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:40 compute-0 sudo[176959]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:41 compute-0 sudo[177112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsuqwkzoatctjgwdnsjnfxzxrinofuvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852921.1930985-655-102474899468023/AnsiballZ_command.py'
Jan 31 09:48:41 compute-0 sudo[177112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:41 compute-0 python3.9[177114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:41 compute-0 sudo[177112]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:41 compute-0 sudo[177265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rornfwfxvatxflxxmbqclwkniiqwbtci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852921.7369587-655-45764774889239/AnsiballZ_command.py'
Jan 31 09:48:41 compute-0 sudo[177265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:42 compute-0 python3.9[177267]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:48:42 compute-0 sudo[177265]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:43 compute-0 sudo[177418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpkuvpeipbhokvlitglygkbxjncmeqoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852923.2593775-734-167829038445863/AnsiballZ_file.py'
Jan 31 09:48:43 compute-0 sudo[177418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:43 compute-0 python3.9[177420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:43 compute-0 sudo[177418]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:44 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 09:48:44 compute-0 sudo[177570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejkcvnzetuwdikubjniptmqmbjnufjsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852923.8173091-734-19076088158665/AnsiballZ_file.py'
Jan 31 09:48:44 compute-0 sudo[177570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:44 compute-0 python3.9[177573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:44 compute-0 sudo[177570]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:44 compute-0 sudo[177723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtsnwfcfzwjespezzyqbtbjjpfzinaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852924.4597754-734-259674825146027/AnsiballZ_file.py'
Jan 31 09:48:44 compute-0 sudo[177723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:44 compute-0 python3.9[177725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:44 compute-0 sudo[177723]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:45 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 09:48:45 compute-0 sudo[177876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjpwdcuaorqvsokpbncuffuihvtdusaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852925.2650049-756-74261867330818/AnsiballZ_file.py'
Jan 31 09:48:45 compute-0 sudo[177876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:45 compute-0 python3.9[177878]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:45 compute-0 sudo[177876]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:46 compute-0 sudo[178028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amxpanwjlhbecqhkmsauscvdztpntmvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852925.8474548-756-233802914005856/AnsiballZ_file.py'
Jan 31 09:48:46 compute-0 sudo[178028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:46 compute-0 python3.9[178030]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:46 compute-0 sudo[178028]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:46 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 09:48:46 compute-0 sudo[178181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyulxtxpkhvnfpgqannnzaasotoygfft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852926.4271498-756-93866401551512/AnsiballZ_file.py'
Jan 31 09:48:46 compute-0 sudo[178181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:46 compute-0 python3.9[178183]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:46 compute-0 sudo[178181]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:47 compute-0 sudo[178333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmkrpewhhjfcbohdblfmtyyipniaolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852927.0542052-756-42810303749876/AnsiballZ_file.py'
Jan 31 09:48:47 compute-0 sudo[178333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:47 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 09:48:47 compute-0 python3.9[178335]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:47 compute-0 sudo[178333]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:48 compute-0 sudo[178486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhgmcxnuxwqebcytajanobxhkruxrdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852927.9839149-756-208840893582157/AnsiballZ_file.py'
Jan 31 09:48:48 compute-0 sudo[178486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:48 compute-0 python3.9[178488]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:48 compute-0 sudo[178486]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:48 compute-0 sudo[178638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvpihgtqwqyfsjggzqhdmmgaoxyleki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852928.5776703-756-197885945955414/AnsiballZ_file.py'
Jan 31 09:48:48 compute-0 sudo[178638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:49 compute-0 python3.9[178640]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:49 compute-0 sudo[178638]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:49 compute-0 sudo[178790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjsogilogmstuufmklqedcxsyrjzbcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852929.4053314-756-103006787655138/AnsiballZ_file.py'
Jan 31 09:48:49 compute-0 sudo[178790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:49 compute-0 python3.9[178792]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:49 compute-0 sudo[178790]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:50 compute-0 podman[178817]: 2026-01-31 09:48:50.980464978 +0000 UTC m=+0.102126019 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 09:48:54 compute-0 sudo[178962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccnnyyjnwunncqbzkkmthixqxojgncu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852934.3045764-925-158653454632486/AnsiballZ_getent.py'
Jan 31 09:48:54 compute-0 sudo[178962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:54 compute-0 python3.9[178964]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 09:48:54 compute-0 sudo[178962]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:55 compute-0 sudo[179115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vioxqrjpngchuadpbxpzuuzccuqbzjmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852935.0360553-933-140962905158426/AnsiballZ_group.py'
Jan 31 09:48:55 compute-0 sudo[179115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:55 compute-0 python3.9[179117]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:48:55 compute-0 groupadd[179118]: group added to /etc/group: name=nova, GID=42436
Jan 31 09:48:55 compute-0 groupadd[179118]: group added to /etc/gshadow: name=nova
Jan 31 09:48:55 compute-0 groupadd[179118]: new group: name=nova, GID=42436
Jan 31 09:48:55 compute-0 sudo[179115]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:56 compute-0 sudo[179273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuewpmsdqrompjtbukekmwhrpdbnckde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852935.9163806-941-262116634190639/AnsiballZ_user.py'
Jan 31 09:48:56 compute-0 sudo[179273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:48:56 compute-0 python3.9[179275]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 09:48:56 compute-0 useradd[179277]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 09:48:56 compute-0 useradd[179277]: add 'nova' to group 'libvirt'
Jan 31 09:48:56 compute-0 useradd[179277]: add 'nova' to shadow group 'libvirt'
Jan 31 09:48:56 compute-0 sudo[179273]: pam_unix(sudo:session): session closed for user root
Jan 31 09:48:57 compute-0 sshd-session[179308]: Accepted publickey for zuul from 192.168.122.30 port 52842 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:48:57 compute-0 systemd-logind[795]: New session 24 of user zuul.
Jan 31 09:48:57 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 31 09:48:57 compute-0 sshd-session[179308]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:48:57 compute-0 sshd-session[179311]: Received disconnect from 192.168.122.30 port 52842:11: disconnected by user
Jan 31 09:48:57 compute-0 sshd-session[179311]: Disconnected from user zuul 192.168.122.30 port 52842
Jan 31 09:48:57 compute-0 sshd-session[179308]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:48:57 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 31 09:48:57 compute-0 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Jan 31 09:48:57 compute-0 systemd-logind[795]: Removed session 24.
Jan 31 09:48:58 compute-0 podman[179336]: 2026-01-31 09:48:58.096106129 +0000 UTC m=+0.120487257 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 31 09:48:58 compute-0 python3.9[179488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:48:59 compute-0 python3.9[179609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852938.1159358-966-183719445550800/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:48:59 compute-0 python3.9[179759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:00 compute-0 python3.9[179835]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:00 compute-0 python3.9[179985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:01 compute-0 python3.9[180106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852940.36604-966-82680491155583/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:02 compute-0 python3.9[180256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:02 compute-0 python3.9[180377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852941.594204-966-238821824765444/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:03 compute-0 python3.9[180527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:03 compute-0 python3.9[180648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852942.6994345-966-93436088928046/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:04 compute-0 python3.9[180798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:04 compute-0 python3.9[180919]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852943.7855077-966-160648252094140/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:05 compute-0 sudo[181069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjwuwovcohbojxriogvryxayhwrjvamg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852944.9039922-1049-252125021334630/AnsiballZ_file.py'
Jan 31 09:49:05 compute-0 sudo[181069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:05 compute-0 python3.9[181071]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:05 compute-0 sudo[181069]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:05 compute-0 sudo[181221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwdhaktqjmmiuscqilizsopjyvqgaoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852945.524469-1057-143473196425262/AnsiballZ_copy.py'
Jan 31 09:49:05 compute-0 sudo[181221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:06 compute-0 python3.9[181223]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:06 compute-0 sudo[181221]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:06 compute-0 sudo[181373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kipubuzvakmzrceqqbtufgnnsmrmnanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852946.165651-1065-25663787533661/AnsiballZ_stat.py'
Jan 31 09:49:06 compute-0 sudo[181373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:06 compute-0 python3.9[181375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:06 compute-0 sudo[181373]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:06 compute-0 sudo[181525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdhtqiqyrnbztrsrhusptoghkjgjrdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852946.7580602-1073-228830039282421/AnsiballZ_stat.py'
Jan 31 09:49:07 compute-0 sudo[181525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:07 compute-0 python3.9[181527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:07 compute-0 sudo[181525]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:07 compute-0 sudo[181648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyshyqbbuvvzrfexcqovxehwlgywcuar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852946.7580602-1073-228830039282421/AnsiballZ_copy.py'
Jan 31 09:49:07 compute-0 sudo[181648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:07 compute-0 python3.9[181650]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769852946.7580602-1073-228830039282421/.source _original_basename=.hnfe4ehb follow=False checksum=1134fbf462a810d948a134f5bbfc57f8859c6a50 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 09:49:07 compute-0 sudo[181648]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:08 compute-0 python3.9[181802]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:09 compute-0 python3.9[181954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:09 compute-0 python3.9[182075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852948.6078274-1099-99265947287441/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:10 compute-0 python3.9[182225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:10 compute-0 python3.9[182346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769852949.6895933-1114-99267524059709/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:11 compute-0 sudo[182496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblubtzabdrhmfhsfjcleikvyrkjddct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852950.9405844-1131-158040518349042/AnsiballZ_container_config_data.py'
Jan 31 09:49:11 compute-0 sudo[182496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:11 compute-0 python3.9[182498]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 09:49:11 compute-0 sudo[182496]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:12 compute-0 sudo[182648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxnbydzkdhjykombsjcrpzslwbbxdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852951.9533954-1142-196515975892613/AnsiballZ_container_config_hash.py'
Jan 31 09:49:12 compute-0 sudo[182648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:12 compute-0 python3.9[182650]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:49:12 compute-0 sudo[182648]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:13 compute-0 sudo[182800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzryhjhjbijoktajyjboimxodqjqoxrl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852953.0495386-1152-41032708609371/AnsiballZ_edpm_container_manage.py'
Jan 31 09:49:13 compute-0 sudo[182800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:13 compute-0 python3[182802]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:49:14 compute-0 podman[182839]: 2026-01-31 09:49:14.060154232 +0000 UTC m=+0.031356246 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 09:49:14 compute-0 podman[182839]: 2026-01-31 09:49:14.234541761 +0000 UTC m=+0.205743695 container create e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:49:14 compute-0 python3[182802]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 09:49:14 compute-0 sudo[182800]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:14 compute-0 sudo[183025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwlanbkhzwmbhkhuvdqaiuohvqgvgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852954.523068-1160-177405472636896/AnsiballZ_stat.py'
Jan 31 09:49:14 compute-0 sudo[183025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:14 compute-0 python3.9[183027]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:14 compute-0 sudo[183025]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:15 compute-0 sudo[183179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcxrqgfnxeavxlucasxmttpdnxsouqml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852955.4190989-1172-37553765033/AnsiballZ_container_config_data.py'
Jan 31 09:49:15 compute-0 sudo[183179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:15 compute-0 python3.9[183181]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 09:49:15 compute-0 sudo[183179]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:16 compute-0 sudo[183331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntfxnejwrdcurcmblfrhkquzlahtdqqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852956.1713471-1183-83598930180560/AnsiballZ_container_config_hash.py'
Jan 31 09:49:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:49:16.409 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:49:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:49:16.411 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:49:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:49:16.411 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:49:16 compute-0 sudo[183331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:16 compute-0 python3.9[183333]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:49:16 compute-0 sudo[183331]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:17 compute-0 sudo[183483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqmnqtrrlmqvlvlrkctgtrdsbkdpwjjo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769852956.8923788-1193-54090002280742/AnsiballZ_edpm_container_manage.py'
Jan 31 09:49:17 compute-0 sudo[183483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:17 compute-0 python3[183485]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:49:17 compute-0 podman[183520]: 2026-01-31 09:49:17.584427316 +0000 UTC m=+0.078409533 container create cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20260127, container_name=nova_compute)
Jan 31 09:49:17 compute-0 podman[183520]: 2026-01-31 09:49:17.530589153 +0000 UTC m=+0.024571410 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 09:49:17 compute-0 python3[183485]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 09:49:17 compute-0 sudo[183483]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:18 compute-0 sudo[183709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koinuunasgusynovjvkbelwofkxezqzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852957.8723845-1201-61002893134552/AnsiballZ_stat.py'
Jan 31 09:49:18 compute-0 sudo[183709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:18 compute-0 python3.9[183711]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:18 compute-0 sudo[183709]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:18 compute-0 sudo[183863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxorzjpmqtkrecnnrlzqfovnzvgfjfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852958.5887911-1210-115680176731776/AnsiballZ_file.py'
Jan 31 09:49:18 compute-0 sudo[183863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:19 compute-0 python3.9[183865]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:19 compute-0 sudo[183863]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:19 compute-0 sudo[184014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmdpntgfyhzpfghvcoemhgoactqxwac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852959.2104843-1210-172962974166055/AnsiballZ_copy.py'
Jan 31 09:49:19 compute-0 sudo[184014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:19 compute-0 python3.9[184016]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769852959.2104843-1210-172962974166055/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:19 compute-0 sudo[184014]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:20 compute-0 sudo[184090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lauopzwisaledkwiolnbjsgtgnmgatzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852959.2104843-1210-172962974166055/AnsiballZ_systemd.py'
Jan 31 09:49:20 compute-0 sudo[184090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:20 compute-0 python3.9[184092]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:49:20 compute-0 systemd[1]: Reloading.
Jan 31 09:49:20 compute-0 systemd-rc-local-generator[184113]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:49:20 compute-0 systemd-sysv-generator[184116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:49:20 compute-0 sudo[184090]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:20 compute-0 sudo[184201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcquuutxfxcwelgmkhamnaxckneszii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852959.2104843-1210-172962974166055/AnsiballZ_systemd.py'
Jan 31 09:49:20 compute-0 sudo[184201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:21 compute-0 python3.9[184203]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:49:21 compute-0 systemd[1]: Reloading.
Jan 31 09:49:21 compute-0 podman[184205]: 2026-01-31 09:49:21.214006428 +0000 UTC m=+0.052048653 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:49:21 compute-0 systemd-rc-local-generator[184242]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:49:21 compute-0 systemd-sysv-generator[184251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:49:21 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 09:49:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:21 compute-0 podman[184262]: 2026-01-31 09:49:21.552182601 +0000 UTC m=+0.102031104 container init cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute)
Jan 31 09:49:21 compute-0 podman[184262]: 2026-01-31 09:49:21.55837886 +0000 UTC m=+0.108227333 container start cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Jan 31 09:49:21 compute-0 podman[184262]: nova_compute
Jan 31 09:49:21 compute-0 systemd[1]: Started nova_compute container.
Jan 31 09:49:21 compute-0 nova_compute[184277]: + sudo -E kolla_set_configs
Jan 31 09:49:21 compute-0 sudo[184201]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Validating config file
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying service configuration files
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Deleting /etc/ceph
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Creating directory /etc/ceph
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Writing out command to execute
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:21 compute-0 nova_compute[184277]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 09:49:21 compute-0 nova_compute[184277]: ++ cat /run_command
Jan 31 09:49:21 compute-0 nova_compute[184277]: + CMD=nova-compute
Jan 31 09:49:21 compute-0 nova_compute[184277]: + ARGS=
Jan 31 09:49:21 compute-0 nova_compute[184277]: + sudo kolla_copy_cacerts
Jan 31 09:49:21 compute-0 nova_compute[184277]: + [[ ! -n '' ]]
Jan 31 09:49:21 compute-0 nova_compute[184277]: + . kolla_extend_start
Jan 31 09:49:21 compute-0 nova_compute[184277]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 09:49:21 compute-0 nova_compute[184277]: Running command: 'nova-compute'
Jan 31 09:49:21 compute-0 nova_compute[184277]: + umask 0022
Jan 31 09:49:21 compute-0 nova_compute[184277]: + exec nova-compute
Jan 31 09:49:22 compute-0 python3.9[184439]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:23 compute-0 python3.9[184589]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.452 184281 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.453 184281 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.454 184281 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.454 184281 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.674 184281 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.685 184281 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:49:23 compute-0 nova_compute[184277]: 2026-01-31 09:49:23.685 184281 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 09:49:23 compute-0 python3.9[184743]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.417 184281 INFO nova.virt.driver [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.531 184281 INFO nova.compute.provider_config [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.550 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.551 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.551 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.552 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.553 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.554 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.555 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.556 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.557 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.558 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.559 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.560 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.561 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.562 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.563 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.564 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.565 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.566 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.567 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.568 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.569 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.570 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.571 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.572 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.573 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.574 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.575 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.576 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.577 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.578 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.579 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.579 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.579 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.580 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.581 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.582 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.583 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.584 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.585 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.586 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.587 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.588 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.589 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.590 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.591 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.592 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.593 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.594 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.595 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.596 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.597 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.598 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.599 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.600 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.601 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.602 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.603 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.603 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.603 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.603 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.603 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.604 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.604 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.604 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.605 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.606 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.606 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.606 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.606 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.606 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.607 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.607 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.607 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.607 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.607 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.608 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.608 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.608 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.608 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.608 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.609 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.609 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.609 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.609 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.609 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.610 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.611 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.611 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.611 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.611 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.611 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.612 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.612 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.612 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.612 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.612 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.613 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.614 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.615 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.616 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.617 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.618 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.619 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.620 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.621 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.622 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.623 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.624 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.624 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.624 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.624 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.624 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.625 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.626 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.626 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.626 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.626 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.626 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.627 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.627 184281 WARNING oslo_config.cfg [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 09:49:24 compute-0 nova_compute[184277]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 09:49:24 compute-0 nova_compute[184277]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 09:49:24 compute-0 nova_compute[184277]: and ``live_migration_inbound_addr`` respectively.
Jan 31 09:49:24 compute-0 nova_compute[184277]: ).  Its value may be silently ignored in the future.
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.627 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.627 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.627 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.628 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.629 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.629 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.629 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.629 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.629 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.630 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.631 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.631 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.631 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.631 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.631 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.632 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.632 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.632 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.632 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.632 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.633 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.634 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.635 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.636 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.637 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.638 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.638 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.638 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.638 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.638 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.639 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.640 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.641 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.642 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.642 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.642 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.642 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.642 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.643 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.644 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.644 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.644 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.644 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.644 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.645 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.645 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.645 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.645 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.645 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.646 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.647 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.648 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.649 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.650 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.650 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.650 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.650 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.650 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.651 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.652 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.653 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.654 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.655 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.656 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.657 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.657 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.657 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.657 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.657 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.658 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.659 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.660 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.660 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.660 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.660 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.660 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.661 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.662 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.663 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.664 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.664 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.664 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.664 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.664 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.665 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.666 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.666 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.666 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.666 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.666 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.667 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.667 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.668 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.669 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.669 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.669 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.669 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.669 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.670 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.671 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.672 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.673 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.674 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.675 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.676 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.676 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.676 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.676 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.676 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.677 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.677 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.677 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.677 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.677 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.678 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.679 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.680 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.681 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.681 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.681 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.681 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.681 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.682 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.683 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.684 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.685 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.686 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.687 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.688 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.689 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.690 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.691 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.692 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.693 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.694 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.695 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.696 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.696 184281 DEBUG oslo_service.service [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.697 184281 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 09:49:24 compute-0 sudo[184893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qapuiumwgqdkqbwkflorykupraimquwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852964.1757858-1270-14934599404171/AnsiballZ_podman_container.py'
Jan 31 09:49:24 compute-0 sudo[184893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.714 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.716 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.717 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.717 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 09:49:24 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 09:49:24 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.790 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fba14ea4160> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.794 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fba14ea4160> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.795 184281 INFO nova.virt.libvirt.driver [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Connection event '1' reason 'None'
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.817 184281 WARNING nova.virt.libvirt.driver [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 09:49:24 compute-0 nova_compute[184277]: 2026-01-31 09:49:24.818 184281 DEBUG nova.virt.libvirt.volume.mount [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 09:49:24 compute-0 python3.9[184895]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 09:49:25 compute-0 sudo[184893]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:25 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:49:25 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:49:25 compute-0 sudo[185127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxlowktwgojxiilwtuoxidaxkooocdvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852965.305074-1278-176237136628538/AnsiballZ_systemd.py'
Jan 31 09:49:25 compute-0 sudo[185127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.648 184281 INFO nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]: 
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <host>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <uuid>9990fbae-f679-470b-9918-13eed4e2ece1</uuid>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <arch>x86_64</arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model>EPYC-Rome-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <vendor>AMD</vendor>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <microcode version='16777317'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <signature family='23' model='49' stepping='0'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='x2apic'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='tsc-deadline'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='osxsave'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='hypervisor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='tsc_adjust'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='spec-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='stibp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='arch-capabilities'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='cmp_legacy'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='topoext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='virt-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='lbrv'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='tsc-scale'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='vmcb-clean'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='pause-filter'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='pfthreshold'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='svme-addr-chk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='rdctl-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='mds-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature name='pschange-mc-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <pages unit='KiB' size='4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <pages unit='KiB' size='2048'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <pages unit='KiB' size='1048576'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <power_management>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <suspend_mem/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <suspend_disk/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <suspend_hybrid/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </power_management>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <iommu support='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <migration_features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <live/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <uri_transports>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <uri_transport>tcp</uri_transport>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <uri_transport>rdma</uri_transport>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </uri_transports>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </migration_features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <topology>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <cells num='1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <cell id='0'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <memory unit='KiB'>7864296</memory>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <pages unit='KiB' size='4'>1966074</pages>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <distances>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <sibling id='0' value='10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           </distances>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           <cpus num='8'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:           </cpus>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         </cell>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </cells>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </topology>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <cache>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </cache>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <secmodel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model>selinux</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <doi>0</doi>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </secmodel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <secmodel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model>dac</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <doi>0</doi>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </secmodel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </host>
Jan 31 09:49:25 compute-0 nova_compute[184277]: 
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <guest>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <os_type>hvm</os_type>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <arch name='i686'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <wordsize>32</wordsize>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <domain type='qemu'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <domain type='kvm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <pae/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <nonpae/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <acpi default='on' toggle='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <apic default='on' toggle='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <cpuselection/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <deviceboot/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <disksnapshot default='on' toggle='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <externalSnapshot/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </guest>
Jan 31 09:49:25 compute-0 nova_compute[184277]: 
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <guest>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <os_type>hvm</os_type>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <arch name='x86_64'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <wordsize>64</wordsize>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <domain type='qemu'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <domain type='kvm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <acpi default='on' toggle='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <apic default='on' toggle='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <cpuselection/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <deviceboot/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <disksnapshot default='on' toggle='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <externalSnapshot/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </guest>
Jan 31 09:49:25 compute-0 nova_compute[184277]: 
Jan 31 09:49:25 compute-0 nova_compute[184277]: </capabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]: 
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.657 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.674 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 09:49:25 compute-0 nova_compute[184277]: <domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <domain>kvm</domain>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <arch>i686</arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <vcpu max='4096'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <iothreads supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <os supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='firmware'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <loader supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>rom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pflash</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='readonly'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>yes</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='secure'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </loader>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </os>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='maximumMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <vendor>AMD</vendor>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='succor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='custom' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <memoryBacking supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='sourceType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>anonymous</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>memfd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </memoryBacking>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <disk supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='diskDevice'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>disk</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cdrom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>floppy</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>lun</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>fdc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>sata</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </disk>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <graphics supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vnc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egl-headless</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </graphics>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <video supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='modelType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vga</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cirrus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>none</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>bochs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ramfb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </video>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hostdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='mode'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>subsystem</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='startupPolicy'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>mandatory</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>requisite</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>optional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='subsysType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pci</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='capsType'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='pciBackend'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hostdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <rng supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>random</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </rng>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <filesystem supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='driverType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>path</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>handle</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtiofs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </filesystem>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tpm supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-tis</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-crb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emulator</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>external</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendVersion'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>2.0</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </tpm>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <redirdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </redirdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <channel supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </channel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <crypto supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </crypto>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <interface supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>passt</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </interface>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <panic supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>isa</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>hyperv</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </panic>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <console supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>null</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dev</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pipe</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stdio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>udp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tcp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu-vdagent</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </console>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <gic supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <genid supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backup supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <async-teardown supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <s390-pv supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <ps2 supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tdx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sev supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sgx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hyperv supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='features'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>relaxed</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vapic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>spinlocks</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vpindex</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>runtime</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>synic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stimer</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reset</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vendor_id</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>frequencies</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reenlightenment</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tlbflush</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ipi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>avic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emsr_bitmap</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>xmm_input</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hyperv>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <launchSecurity supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]: </domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.682 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 09:49:25 compute-0 nova_compute[184277]: <domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <domain>kvm</domain>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <arch>i686</arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <vcpu max='240'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <iothreads supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <os supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='firmware'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <loader supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>rom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pflash</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='readonly'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>yes</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='secure'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </loader>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </os>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='maximumMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <vendor>AMD</vendor>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='succor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='custom' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <memoryBacking supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='sourceType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>anonymous</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>memfd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </memoryBacking>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <disk supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='diskDevice'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>disk</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cdrom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>floppy</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>lun</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ide</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>fdc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>sata</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </disk>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <graphics supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vnc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egl-headless</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </graphics>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <video supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='modelType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vga</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cirrus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>none</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>bochs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ramfb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </video>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hostdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='mode'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>subsystem</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='startupPolicy'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>mandatory</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>requisite</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>optional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='subsysType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pci</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='capsType'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='pciBackend'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hostdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <rng supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>random</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </rng>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <filesystem supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='driverType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>path</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>handle</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtiofs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </filesystem>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tpm supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-tis</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-crb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emulator</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>external</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendVersion'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>2.0</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </tpm>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <redirdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </redirdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <channel supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </channel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <crypto supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </crypto>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <interface supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>passt</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </interface>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <panic supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>isa</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>hyperv</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </panic>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <console supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>null</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dev</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pipe</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stdio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>udp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tcp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu-vdagent</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </console>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <gic supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <genid supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backup supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <async-teardown supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <s390-pv supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <ps2 supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tdx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sev supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sgx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hyperv supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='features'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>relaxed</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vapic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>spinlocks</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vpindex</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>runtime</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>synic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stimer</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reset</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vendor_id</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>frequencies</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reenlightenment</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tlbflush</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ipi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>avic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emsr_bitmap</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>xmm_input</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hyperv>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <launchSecurity supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]: </domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.736 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.741 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 09:49:25 compute-0 nova_compute[184277]: <domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <domain>kvm</domain>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <arch>x86_64</arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <vcpu max='4096'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <iothreads supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <os supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='firmware'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>efi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <loader supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>rom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pflash</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='readonly'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>yes</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='secure'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>yes</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </loader>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </os>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='maximumMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <vendor>AMD</vendor>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='succor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='custom' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <memoryBacking supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='sourceType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>anonymous</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>memfd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </memoryBacking>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <disk supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='diskDevice'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>disk</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cdrom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>floppy</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>lun</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>fdc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>sata</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </disk>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <graphics supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vnc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egl-headless</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </graphics>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <video supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='modelType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vga</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cirrus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>none</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>bochs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ramfb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </video>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hostdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='mode'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>subsystem</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='startupPolicy'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>mandatory</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>requisite</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>optional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='subsysType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pci</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='capsType'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='pciBackend'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hostdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <rng supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>random</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </rng>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <filesystem supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='driverType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>path</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>handle</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtiofs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </filesystem>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tpm supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-tis</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-crb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emulator</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>external</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendVersion'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>2.0</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </tpm>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <redirdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </redirdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <channel supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </channel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <crypto supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </crypto>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <interface supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>passt</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </interface>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <panic supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>isa</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>hyperv</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </panic>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <console supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>null</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dev</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pipe</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stdio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>udp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tcp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu-vdagent</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </console>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <gic supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <genid supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backup supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <async-teardown supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <s390-pv supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <ps2 supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tdx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sev supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sgx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hyperv supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='features'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>relaxed</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vapic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>spinlocks</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vpindex</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>runtime</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>synic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stimer</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reset</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vendor_id</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>frequencies</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reenlightenment</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tlbflush</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ipi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>avic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emsr_bitmap</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>xmm_input</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hyperv>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <launchSecurity supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]: </domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.808 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 09:49:25 compute-0 nova_compute[184277]: <domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <domain>kvm</domain>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <arch>x86_64</arch>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <vcpu max='240'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <iothreads supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <os supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='firmware'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <loader supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>rom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pflash</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='readonly'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>yes</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='secure'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>no</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </loader>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </os>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='maximumMigratable'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>on</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>off</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <vendor>AMD</vendor>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='succor'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <mode name='custom' supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ddpd-u'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sha512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm3'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sm4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Denverton-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amd-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='auto-ibrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='perfmon-v2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbpb'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='stibp-always-on'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='EPYC-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 python3.9[185129]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-128'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-256'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx10-512'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='prefetchiti'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Haswell-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512er'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512pf'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fma4'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tbm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xop'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='amx-tile'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-bf16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-fp16'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bitalg'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrc'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fzrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='la57'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='taa-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ifma'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cmpccxadd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fbsdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='fsrs'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ibrs-all'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='intel-psfd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='lam'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mcdt-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pbrsb-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='psdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='serialize'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vaes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='hle'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='rtm'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512bw'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512cd'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512dq'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512f'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='avx512vl'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='invpcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pcid'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='pku'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='mpx'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='core-capability'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='split-lock-detect'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='cldemote'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='erms'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='gfni'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdir64b'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='movdiri'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='xsaves'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='athlon-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='core2duo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='coreduo-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='n270-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='ss'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <blockers model='phenom-v1'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnow'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <feature name='3dnowext'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </blockers>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </mode>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </cpu>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <memoryBacking supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <enum name='sourceType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>anonymous</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <value>memfd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </memoryBacking>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <disk supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='diskDevice'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>disk</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cdrom</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>floppy</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>lun</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ide</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>fdc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>sata</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </disk>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <graphics supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vnc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egl-headless</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </graphics>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <video supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='modelType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vga</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>cirrus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>none</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>bochs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ramfb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </video>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hostdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='mode'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>subsystem</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='startupPolicy'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>mandatory</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>requisite</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>optional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='subsysType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pci</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>scsi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='capsType'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='pciBackend'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hostdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <rng supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtio-non-transitional</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>random</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>egd</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </rng>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <filesystem supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='driverType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>path</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>handle</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>virtiofs</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </filesystem>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tpm supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-tis</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tpm-crb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emulator</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>external</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendVersion'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>2.0</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </tpm>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <redirdev supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='bus'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>usb</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </redirdev>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <channel supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </channel>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <crypto supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendModel'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>builtin</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </crypto>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <interface supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='backendType'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>default</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>passt</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </interface>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <panic supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='model'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>isa</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>hyperv</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </panic>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <console supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='type'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>null</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vc</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pty</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dev</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>file</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>pipe</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stdio</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>udp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tcp</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>unix</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>qemu-vdagent</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>dbus</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </console>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </devices>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   <features>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <gic supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <genid supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <backup supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <async-teardown supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <s390-pv supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <ps2 supported='yes'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <tdx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sev supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <sgx supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <hyperv supported='yes'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <enum name='features'>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>relaxed</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vapic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>spinlocks</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vpindex</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>runtime</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>synic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>stimer</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reset</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>vendor_id</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>frequencies</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>reenlightenment</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>tlbflush</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>ipi</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>avic</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>emsr_bitmap</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <value>xmm_input</value>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </enum>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       <defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:25 compute-0 nova_compute[184277]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:25 compute-0 nova_compute[184277]:       </defaults>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     </hyperv>
Jan 31 09:49:25 compute-0 nova_compute[184277]:     <launchSecurity supported='no'/>
Jan 31 09:49:25 compute-0 nova_compute[184277]:   </features>
Jan 31 09:49:25 compute-0 nova_compute[184277]: </domainCapabilities>
Jan 31 09:49:25 compute-0 nova_compute[184277]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.870 184281 DEBUG nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.870 184281 INFO nova.virt.libvirt.host [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Secure Boot support detected
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.872 184281 INFO nova.virt.libvirt.driver [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.880 184281 DEBUG nova.virt.libvirt.driver [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.922 184281 INFO nova.virt.node [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Determined node identity 1f8a458f-baaf-434f-841c-59d735622205 from /var/lib/nova/compute_id
Jan 31 09:49:25 compute-0 systemd[1]: Stopping nova_compute container...
Jan 31 09:49:25 compute-0 nova_compute[184277]: 2026-01-31 09:49:25.959 184281 WARNING nova.compute.manager [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Compute nodes ['1f8a458f-baaf-434f-841c-59d735622205'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 09:49:26 compute-0 nova_compute[184277]: 2026-01-31 09:49:26.006 184281 INFO nova.compute.manager [None req-9db17bb7-a87a-47b2-9fdd-387f3002f34b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 09:49:26 compute-0 nova_compute[184277]: 2026-01-31 09:49:26.013 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:49:26 compute-0 nova_compute[184277]: 2026-01-31 09:49:26.014 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:49:26 compute-0 nova_compute[184277]: 2026-01-31 09:49:26.014 184281 DEBUG oslo_concurrency.lockutils [None req-d2d5a59d-465c-45f7-9a90-51fb4c6e1a8d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:49:26 compute-0 virtqemud[184917]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 09:49:26 compute-0 virtqemud[184917]: hostname: compute-0
Jan 31 09:49:26 compute-0 virtqemud[184917]: End of file while reading data: Input/output error
Jan 31 09:49:26 compute-0 systemd[1]: libpod-cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550.scope: Deactivated successfully.
Jan 31 09:49:26 compute-0 systemd[1]: libpod-cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550.scope: Consumed 2.885s CPU time.
Jan 31 09:49:26 compute-0 podman[185137]: 2026-01-31 09:49:26.360981573 +0000 UTC m=+0.393276414 container died cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550-userdata-shm.mount: Deactivated successfully.
Jan 31 09:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5-merged.mount: Deactivated successfully.
Jan 31 09:49:26 compute-0 podman[185137]: 2026-01-31 09:49:26.421142518 +0000 UTC m=+0.453437379 container cleanup cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:49:26 compute-0 podman[185137]: nova_compute
Jan 31 09:49:26 compute-0 podman[185167]: nova_compute
Jan 31 09:49:26 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 09:49:26 compute-0 systemd[1]: Stopped nova_compute container.
Jan 31 09:49:26 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 09:49:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0239289ca31962f986d7314e2145d4dcf17dc5c281818d60245974e62d07dfd5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:26 compute-0 podman[185179]: 2026-01-31 09:49:26.603140486 +0000 UTC m=+0.100544289 container init cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:49:26 compute-0 podman[185179]: 2026-01-31 09:49:26.608307915 +0000 UTC m=+0.105711698 container start cd217c2f10d676c2202f0bf9233de79cfb27606a4d99e3b0db31caf2cc1d8550 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute)
Jan 31 09:49:26 compute-0 podman[185179]: nova_compute
Jan 31 09:49:26 compute-0 nova_compute[185194]: + sudo -E kolla_set_configs
Jan 31 09:49:26 compute-0 systemd[1]: Started nova_compute container.
Jan 31 09:49:26 compute-0 sudo[185127]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Validating config file
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying service configuration files
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /etc/ceph
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Creating directory /etc/ceph
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Writing out command to execute
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:26 compute-0 nova_compute[185194]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 09:49:26 compute-0 nova_compute[185194]: ++ cat /run_command
Jan 31 09:49:26 compute-0 nova_compute[185194]: + CMD=nova-compute
Jan 31 09:49:26 compute-0 nova_compute[185194]: + ARGS=
Jan 31 09:49:26 compute-0 nova_compute[185194]: + sudo kolla_copy_cacerts
Jan 31 09:49:26 compute-0 nova_compute[185194]: + [[ ! -n '' ]]
Jan 31 09:49:26 compute-0 nova_compute[185194]: + . kolla_extend_start
Jan 31 09:49:26 compute-0 nova_compute[185194]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 09:49:26 compute-0 nova_compute[185194]: Running command: 'nova-compute'
Jan 31 09:49:26 compute-0 nova_compute[185194]: + umask 0022
Jan 31 09:49:26 compute-0 nova_compute[185194]: + exec nova-compute
Jan 31 09:49:27 compute-0 sudo[185355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqfjecvtwhotjsppqwckjcigsjpfttax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852966.843674-1287-234160054558626/AnsiballZ_podman_container.py'
Jan 31 09:49:27 compute-0 sudo[185355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:27 compute-0 python3.9[185357]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 09:49:27 compute-0 systemd[1]: Started libpod-conmon-e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a.scope.
Jan 31 09:49:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb74844e5d79612371b76bb65eb3fc06802e3c6124c17b4ffc62a4daad3e973/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb74844e5d79612371b76bb65eb3fc06802e3c6124c17b4ffc62a4daad3e973/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb74844e5d79612371b76bb65eb3fc06802e3c6124c17b4ffc62a4daad3e973/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 09:49:27 compute-0 podman[185383]: 2026-01-31 09:49:27.590206645 +0000 UTC m=+0.123853784 container init e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:49:27 compute-0 podman[185383]: 2026-01-31 09:49:27.599586875 +0000 UTC m=+0.133234004 container start e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:49:27 compute-0 python3.9[185357]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 09:49:27 compute-0 nova_compute_init[185404]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 09:49:27 compute-0 systemd[1]: libpod-e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a.scope: Deactivated successfully.
Jan 31 09:49:27 compute-0 podman[185405]: 2026-01-31 09:49:27.65001539 +0000 UTC m=+0.025593949 container died e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a-userdata-shm.mount: Deactivated successfully.
Jan 31 09:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eb74844e5d79612371b76bb65eb3fc06802e3c6124c17b4ffc62a4daad3e973-merged.mount: Deactivated successfully.
Jan 31 09:49:27 compute-0 podman[185416]: 2026-01-31 09:49:27.73635969 +0000 UTC m=+0.074871390 container cleanup e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 09:49:27 compute-0 systemd[1]: libpod-conmon-e0a19c1345116e233b34fe45cf843589d18f72ef66ca8d255f98db0c1582287a.scope: Deactivated successfully.
Jan 31 09:49:27 compute-0 sudo[185355]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.463 185198 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.464 185198 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.464 185198 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.464 185198 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.584 185198 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:49:28 compute-0 sshd-session[162151]: Connection closed by 192.168.122.30 port 47542
Jan 31 09:49:28 compute-0 sshd-session[162148]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.594 185198 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:49:28 compute-0 nova_compute[185194]: 2026-01-31 09:49:28.595 185198 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 09:49:28 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 31 09:49:28 compute-0 systemd[1]: session-23.scope: Consumed 1min 25.778s CPU time.
Jan 31 09:49:28 compute-0 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Jan 31 09:49:28 compute-0 systemd-logind[795]: Removed session 23.
Jan 31 09:49:28 compute-0 podman[185470]: 2026-01-31 09:49:28.709115476 +0000 UTC m=+0.083573852 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.237 185198 INFO nova.virt.driver [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.331 185198 INFO nova.compute.provider_config [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.393 185198 DEBUG oslo_concurrency.lockutils [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.393 185198 DEBUG oslo_concurrency.lockutils [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.394 185198 DEBUG oslo_concurrency.lockutils [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.394 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.394 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.394 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.394 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.395 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.396 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.397 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.398 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.398 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.398 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.398 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.398 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.399 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.400 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.401 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.402 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.403 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.404 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.405 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.406 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.407 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.408 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.409 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.410 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.411 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.412 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.413 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.414 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.415 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.416 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.417 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.418 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.419 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.420 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.421 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.422 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.423 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.424 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.425 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.426 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.427 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.428 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.429 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.430 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.431 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.432 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.433 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.434 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.435 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.436 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.437 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.438 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.439 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.440 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.441 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.442 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.443 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.444 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.445 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.446 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.447 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.448 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.449 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.450 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.451 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.452 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.453 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.454 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.455 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.456 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.457 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.458 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.458 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.458 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.458 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.459 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.460 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.461 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.462 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.463 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.464 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.464 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.464 185198 WARNING oslo_config.cfg [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 09:49:29 compute-0 nova_compute[185194]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 09:49:29 compute-0 nova_compute[185194]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 09:49:29 compute-0 nova_compute[185194]: and ``live_migration_inbound_addr`` respectively.
Jan 31 09:49:29 compute-0 nova_compute[185194]: ).  Its value may be silently ignored in the future.
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.464 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.464 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.465 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.466 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.467 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.468 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.469 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.470 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.471 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.472 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.473 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.474 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.475 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.476 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.477 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.478 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.479 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.480 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.481 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.482 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.483 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.484 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.485 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.486 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.487 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.488 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.489 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.490 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.491 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.492 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.493 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.494 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.495 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.496 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.497 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.498 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.499 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.500 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.501 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.502 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.503 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.504 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.505 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.506 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.507 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.508 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.509 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.510 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.511 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.512 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.513 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.514 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.515 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.516 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.517 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.518 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.519 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.520 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.521 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.522 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.523 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.524 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.524 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.524 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.524 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.524 185198 DEBUG oslo_service.service [None req-788c7694-953e-4551-80f2-7b9bf61dbb39 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.525 185198 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.586 185198 INFO nova.virt.node [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Determined node identity 1f8a458f-baaf-434f-841c-59d735622205 from /var/lib/nova/compute_id
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.586 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.587 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.587 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.587 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.601 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f13636dfc40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.603 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f13636dfc40> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.604 185198 INFO nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Connection event '1' reason 'None'
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.609 185198 INFO nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]: 
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <host>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <uuid>9990fbae-f679-470b-9918-13eed4e2ece1</uuid>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <arch>x86_64</arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model>EPYC-Rome-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <vendor>AMD</vendor>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <microcode version='16777317'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <signature family='23' model='49' stepping='0'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='x2apic'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='tsc-deadline'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='osxsave'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='hypervisor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='tsc_adjust'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='spec-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='stibp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='arch-capabilities'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='cmp_legacy'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='topoext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='virt-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='lbrv'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='tsc-scale'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='vmcb-clean'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='pause-filter'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='pfthreshold'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='svme-addr-chk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='rdctl-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='mds-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature name='pschange-mc-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <pages unit='KiB' size='4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <pages unit='KiB' size='2048'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <pages unit='KiB' size='1048576'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <power_management>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <suspend_mem/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <suspend_disk/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <suspend_hybrid/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </power_management>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <iommu support='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <migration_features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <live/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <uri_transports>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <uri_transport>tcp</uri_transport>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <uri_transport>rdma</uri_transport>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </uri_transports>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </migration_features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <topology>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <cells num='1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <cell id='0'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <memory unit='KiB'>7864296</memory>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <pages unit='KiB' size='4'>1966074</pages>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <distances>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <sibling id='0' value='10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           </distances>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           <cpus num='8'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:           </cpus>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         </cell>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </cells>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </topology>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <cache>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </cache>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <secmodel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model>selinux</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <doi>0</doi>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </secmodel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <secmodel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model>dac</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <doi>0</doi>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </secmodel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </host>
Jan 31 09:49:29 compute-0 nova_compute[185194]: 
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <guest>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <os_type>hvm</os_type>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <arch name='i686'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <wordsize>32</wordsize>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <domain type='qemu'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <domain type='kvm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <pae/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <nonpae/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <acpi default='on' toggle='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <apic default='on' toggle='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <cpuselection/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <deviceboot/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <disksnapshot default='on' toggle='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <externalSnapshot/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </guest>
Jan 31 09:49:29 compute-0 nova_compute[185194]: 
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <guest>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <os_type>hvm</os_type>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <arch name='x86_64'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <wordsize>64</wordsize>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <domain type='qemu'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <domain type='kvm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <acpi default='on' toggle='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <apic default='on' toggle='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <cpuselection/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <deviceboot/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <disksnapshot default='on' toggle='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <externalSnapshot/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </guest>
Jan 31 09:49:29 compute-0 nova_compute[185194]: 
Jan 31 09:49:29 compute-0 nova_compute[185194]: </capabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]: 
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.616 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.620 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 09:49:29 compute-0 nova_compute[185194]: <domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <domain>kvm</domain>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <arch>i686</arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <vcpu max='4096'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <iothreads supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <os supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='firmware'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <loader supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>rom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pflash</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='readonly'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>yes</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='secure'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </loader>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </os>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='maximumMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <vendor>AMD</vendor>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='succor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='custom' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <memoryBacking supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='sourceType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>anonymous</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>memfd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </memoryBacking>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <disk supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='diskDevice'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>disk</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cdrom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>floppy</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>lun</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>fdc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>sata</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <graphics supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vnc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egl-headless</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </graphics>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <video supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='modelType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vga</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cirrus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>none</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>bochs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ramfb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </video>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hostdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='mode'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>subsystem</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='startupPolicy'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>mandatory</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>requisite</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>optional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='subsysType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pci</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='capsType'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='pciBackend'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hostdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <rng supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>random</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <filesystem supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='driverType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>path</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>handle</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtiofs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </filesystem>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tpm supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-tis</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-crb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emulator</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>external</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendVersion'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>2.0</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </tpm>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <redirdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </redirdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <channel supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </channel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <crypto supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </crypto>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <interface supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>passt</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <panic supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>isa</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>hyperv</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </panic>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <console supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>null</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dev</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pipe</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stdio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>udp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tcp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu-vdagent</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </console>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <gic supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <genid supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backup supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <async-teardown supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <s390-pv supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <ps2 supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tdx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sev supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sgx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hyperv supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='features'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>relaxed</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vapic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>spinlocks</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vpindex</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>runtime</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>synic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stimer</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reset</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vendor_id</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>frequencies</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reenlightenment</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tlbflush</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ipi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>avic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emsr_bitmap</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>xmm_input</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hyperv>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <launchSecurity supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]: </domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.626 185198 DEBUG nova.virt.libvirt.volume.mount [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.626 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 09:49:29 compute-0 nova_compute[185194]: <domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <domain>kvm</domain>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <arch>i686</arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <vcpu max='240'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <iothreads supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <os supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='firmware'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <loader supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>rom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pflash</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='readonly'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>yes</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='secure'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </loader>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </os>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='maximumMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <vendor>AMD</vendor>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='succor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='custom' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <memoryBacking supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='sourceType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>anonymous</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>memfd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </memoryBacking>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <disk supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='diskDevice'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>disk</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cdrom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>floppy</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>lun</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ide</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>fdc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>sata</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <graphics supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vnc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egl-headless</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </graphics>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <video supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='modelType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vga</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cirrus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>none</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>bochs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ramfb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </video>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hostdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='mode'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>subsystem</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='startupPolicy'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>mandatory</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>requisite</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>optional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='subsysType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pci</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='capsType'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='pciBackend'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hostdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <rng supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>random</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <filesystem supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='driverType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>path</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>handle</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtiofs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </filesystem>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tpm supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-tis</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-crb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emulator</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>external</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendVersion'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>2.0</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </tpm>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <redirdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </redirdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <channel supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </channel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <crypto supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </crypto>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <interface supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>passt</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <panic supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>isa</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>hyperv</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </panic>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <console supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>null</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dev</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pipe</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stdio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>udp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tcp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu-vdagent</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </console>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <gic supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <genid supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backup supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <async-teardown supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <s390-pv supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <ps2 supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tdx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sev supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sgx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hyperv supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='features'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>relaxed</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vapic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>spinlocks</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vpindex</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>runtime</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>synic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stimer</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reset</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vendor_id</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>frequencies</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reenlightenment</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tlbflush</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ipi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>avic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emsr_bitmap</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>xmm_input</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hyperv>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <launchSecurity supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]: </domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.685 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.689 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 09:49:29 compute-0 nova_compute[185194]: <domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <domain>kvm</domain>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <arch>x86_64</arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <vcpu max='4096'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <iothreads supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <os supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='firmware'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>efi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <loader supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>rom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pflash</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='readonly'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>yes</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='secure'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>yes</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </loader>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </os>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='maximumMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <vendor>AMD</vendor>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='succor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='custom' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <memoryBacking supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='sourceType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>anonymous</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>memfd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </memoryBacking>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <disk supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='diskDevice'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>disk</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cdrom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>floppy</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>lun</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>fdc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>sata</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <graphics supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vnc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egl-headless</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </graphics>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <video supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='modelType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vga</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cirrus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>none</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>bochs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ramfb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </video>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hostdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='mode'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>subsystem</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='startupPolicy'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>mandatory</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>requisite</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>optional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='subsysType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pci</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='capsType'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='pciBackend'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hostdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <rng supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>random</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <filesystem supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='driverType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>path</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>handle</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtiofs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </filesystem>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tpm supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-tis</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-crb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emulator</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>external</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendVersion'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>2.0</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </tpm>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <redirdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </redirdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <channel supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </channel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <crypto supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </crypto>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <interface supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>passt</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <panic supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>isa</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>hyperv</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </panic>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <console supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>null</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dev</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pipe</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stdio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>udp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tcp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu-vdagent</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </console>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <gic supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <genid supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backup supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <async-teardown supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <s390-pv supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <ps2 supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tdx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sev supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sgx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hyperv supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='features'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>relaxed</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vapic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>spinlocks</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vpindex</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>runtime</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>synic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stimer</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reset</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vendor_id</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>frequencies</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reenlightenment</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tlbflush</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ipi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>avic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emsr_bitmap</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>xmm_input</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hyperv>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <launchSecurity supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]: </domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.784 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 09:49:29 compute-0 nova_compute[185194]: <domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <domain>kvm</domain>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <arch>x86_64</arch>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <vcpu max='240'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <iothreads supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <os supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='firmware'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <loader supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>rom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pflash</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='readonly'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>yes</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='secure'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>no</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </loader>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </os>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-passthrough' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='hostPassthroughMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='maximum' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='maximumMigratable'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>on</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>off</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='host-model' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <vendor>AMD</vendor>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='x2apic'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='hypervisor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='stibp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='overflow-recov'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='succor'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lbrv'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='tsc-scale'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='flushbyasid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pause-filter'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='pfthreshold'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <feature policy='disable' name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <mode name='custom' supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Broadwell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='ClearwaterForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ddpd-u'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sha512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm3'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sm4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Cooperlake-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Denverton-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Dhyana-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Milan-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Rome-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-Turin-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amd-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='auto-ibrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vp2intersect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fs-gs-base-ns'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibpb-brtype'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='no-nested-data-bp'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='null-sel-clr-base'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='perfmon-v2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbpb'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='srso-user-kernel-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='stibp-always-on'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='EPYC-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='GraniteRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-128'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-256'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx10-512'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='prefetchiti'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Haswell-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v6'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Icelake-Server-v7'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='IvyBridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='KnightsMill-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4fmaps'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-4vnniw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512er'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512pf'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G4-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Opteron_G5-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fma4'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tbm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xop'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SapphireRapids-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='amx-tile'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-bf16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-fp16'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512-vpopcntdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bitalg'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vbmi2'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrc'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fzrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='la57'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='taa-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='tsx-ldtrk'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='SierraForest-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ifma'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-ne-convert'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx-vnni-int8'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bhi-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='bus-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cmpccxadd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fbsdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='fsrs'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ibrs-all'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='intel-psfd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ipred-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='lam'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mcdt-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pbrsb-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='psdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rrsba-ctrl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='sbdr-ssdp-no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='serialize'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vaes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='vpclmulqdq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Client-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='hle'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='rtm'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Skylake-Server-v5'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512bw'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512cd'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512dq'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512f'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='avx512vl'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='invpcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pcid'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='pku'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='mpx'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v2'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v3'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='core-capability'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='split-lock-detect'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='Snowridge-v4'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='cldemote'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='erms'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='gfni'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdir64b'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='movdiri'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='xsaves'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='athlon-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='core2duo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='coreduo-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='n270-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='ss'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <blockers model='phenom-v1'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnow'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <feature name='3dnowext'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </blockers>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </mode>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <memoryBacking supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <enum name='sourceType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>anonymous</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <value>memfd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </memoryBacking>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <disk supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='diskDevice'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>disk</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cdrom</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>floppy</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>lun</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ide</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>fdc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>sata</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <graphics supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vnc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egl-headless</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </graphics>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <video supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='modelType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vga</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>cirrus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>none</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>bochs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ramfb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </video>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hostdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='mode'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>subsystem</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='startupPolicy'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>mandatory</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>requisite</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>optional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='subsysType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pci</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>scsi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='capsType'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='pciBackend'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hostdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <rng supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtio-non-transitional</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>random</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>egd</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <filesystem supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='driverType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>path</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>handle</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>virtiofs</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </filesystem>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tpm supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-tis</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tpm-crb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emulator</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>external</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendVersion'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>2.0</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </tpm>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <redirdev supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='bus'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>usb</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </redirdev>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <channel supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </channel>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <crypto supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendModel'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>builtin</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </crypto>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <interface supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='backendType'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>default</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>passt</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <panic supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='model'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>isa</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>hyperv</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </panic>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <console supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='type'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>null</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vc</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pty</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dev</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>file</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>pipe</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stdio</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>udp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tcp</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>unix</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>qemu-vdagent</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>dbus</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </console>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   <features>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <gic supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <vmcoreinfo supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <genid supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backingStoreInput supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <backup supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <async-teardown supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <s390-pv supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <ps2 supported='yes'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <tdx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sev supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <sgx supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <hyperv supported='yes'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <enum name='features'>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>relaxed</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vapic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>spinlocks</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vpindex</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>runtime</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>synic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>stimer</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reset</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>vendor_id</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>frequencies</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>reenlightenment</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>tlbflush</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>ipi</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>avic</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>emsr_bitmap</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <value>xmm_input</value>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </enum>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       <defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <spinlocks>4095</spinlocks>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <stimer_direct>on</stimer_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 09:49:29 compute-0 nova_compute[185194]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 09:49:29 compute-0 nova_compute[185194]:       </defaults>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     </hyperv>
Jan 31 09:49:29 compute-0 nova_compute[185194]:     <launchSecurity supported='no'/>
Jan 31 09:49:29 compute-0 nova_compute[185194]:   </features>
Jan 31 09:49:29 compute-0 nova_compute[185194]: </domainCapabilities>
Jan 31 09:49:29 compute-0 nova_compute[185194]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.851 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.852 185198 INFO nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Secure Boot support detected
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.855 185198 INFO nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.855 185198 INFO nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.864 185198 DEBUG nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.921 185198 INFO nova.virt.node [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Determined node identity 1f8a458f-baaf-434f-841c-59d735622205 from /var/lib/nova/compute_id
Jan 31 09:49:29 compute-0 nova_compute[185194]: 2026-01-31 09:49:29.958 185198 WARNING nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Compute nodes ['1f8a458f-baaf-434f-841c-59d735622205'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.024 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.097 185198 WARNING nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.097 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.097 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.098 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.098 185198 DEBUG nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:49:30 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 09:49:30 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 09:49:30 compute-0 rsyslogd[1004]: imjournal from <np0005603742:nova_compute>: begin to drop messages due to rate-limiting
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.398 185198 WARNING nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.399 185198 DEBUG nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5981MB free_disk=72.63988494873047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.399 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.399 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.471 185198 WARNING nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] No compute node record for compute-0.ctlplane.example.com:1f8a458f-baaf-434f-841c-59d735622205: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 1f8a458f-baaf-434f-841c-59d735622205 could not be found.
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.524 185198 INFO nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 1f8a458f-baaf-434f-841c-59d735622205
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.635 185198 DEBUG nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:49:30 compute-0 nova_compute[185194]: 2026-01-31 09:49:30.636 185198 DEBUG nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.152 185198 INFO nova.scheduler.client.report [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [req-24fd4e98-d3fe-4df6-94d5-7b8f1fd2e874] Created resource provider record via placement API for resource provider with UUID 1f8a458f-baaf-434f-841c-59d735622205 and name compute-0.ctlplane.example.com.
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.642 185198 DEBUG nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 09:49:32 compute-0 nova_compute[185194]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.642 185198 INFO nova.virt.libvirt.host [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] kernel doesn't support AMD SEV
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.643 185198 DEBUG nova.compute.provider_tree [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.643 185198 DEBUG nova.virt.libvirt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.778 185198 DEBUG nova.scheduler.client.report [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Updated inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.778 185198 DEBUG nova.compute.provider_tree [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Updating resource provider 1f8a458f-baaf-434f-841c-59d735622205 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.778 185198 DEBUG nova.compute.provider_tree [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.889 185198 DEBUG nova.compute.provider_tree [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Updating resource provider 1f8a458f-baaf-434f-841c-59d735622205 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.939 185198 DEBUG nova.compute.resource_tracker [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.939 185198 DEBUG oslo_concurrency.lockutils [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:49:32 compute-0 nova_compute[185194]: 2026-01-31 09:49:32.940 185198 DEBUG nova.service [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 09:49:35 compute-0 nova_compute[185194]: 2026-01-31 09:49:35.655 185198 DEBUG nova.service [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 09:49:35 compute-0 nova_compute[185194]: 2026-01-31 09:49:35.656 185198 DEBUG nova.servicegroup.drivers.db [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 09:49:36 compute-0 sshd-session[185541]: Accepted publickey for zuul from 192.168.122.30 port 37990 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:49:36 compute-0 systemd-logind[795]: New session 25 of user zuul.
Jan 31 09:49:36 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 31 09:49:36 compute-0 sshd-session[185541]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:49:37 compute-0 python3.9[185694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:49:39 compute-0 sudo[185848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzsmeocqixrzuehwmztineqjtwcxthrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852978.5872195-31-204994843865517/AnsiballZ_systemd_service.py'
Jan 31 09:49:39 compute-0 sudo[185848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:39 compute-0 python3.9[185850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:49:39 compute-0 systemd[1]: Reloading.
Jan 31 09:49:39 compute-0 systemd-rc-local-generator[185877]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:49:39 compute-0 systemd-sysv-generator[185881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:49:39 compute-0 sudo[185848]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:40 compute-0 python3.9[186035]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:49:40 compute-0 network[186052]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:49:40 compute-0 network[186053]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:49:40 compute-0 network[186054]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:49:43 compute-0 sudo[186324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsznfsvxhmqgledzrcaemjsvfwwnhova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852983.5946748-50-103125310869915/AnsiballZ_systemd_service.py'
Jan 31 09:49:43 compute-0 sudo[186324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:44 compute-0 python3.9[186326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:49:44 compute-0 sudo[186324]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:45 compute-0 sudo[186477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnudskoenmhgjxlxcbobkeyhgzaquqxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852984.5032005-60-59202311326266/AnsiballZ_file.py'
Jan 31 09:49:45 compute-0 sudo[186477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:46 compute-0 python3.9[186479]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:46 compute-0 sudo[186477]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:46 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:49:46 compute-0 sudo[186630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vleeolawddfjehdgdhefpcsvnagbnogn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852986.380512-68-39756762603452/AnsiballZ_file.py'
Jan 31 09:49:46 compute-0 sudo[186630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:46 compute-0 python3.9[186632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:46 compute-0 sudo[186630]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:47 compute-0 sudo[186782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvrlgevoyevfaveofltlxmeefduggbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852987.096263-77-189502373190006/AnsiballZ_command.py'
Jan 31 09:49:47 compute-0 sudo[186782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:47 compute-0 python3.9[186784]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:49:47 compute-0 sudo[186782]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:48 compute-0 python3.9[186936]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:49:48 compute-0 sudo[187086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjdqzbescwpscwfduhpjfrtovkunamtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852988.7561796-95-209101139937359/AnsiballZ_systemd_service.py'
Jan 31 09:49:48 compute-0 sudo[187086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:49 compute-0 python3.9[187088]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:49:49 compute-0 systemd[1]: Reloading.
Jan 31 09:49:49 compute-0 systemd-rc-local-generator[187112]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:49:49 compute-0 systemd-sysv-generator[187115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:49:49 compute-0 sudo[187086]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:49 compute-0 sudo[187273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgkvdattgqtvofiguiizhizjwaqabmci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852989.7279966-103-88326065553376/AnsiballZ_command.py'
Jan 31 09:49:49 compute-0 sudo[187273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:50 compute-0 python3.9[187275]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:49:50 compute-0 sudo[187273]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:50 compute-0 sudo[187426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubwvjsotfckpqleklbxsscptixttxkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852990.6334395-112-30819743709831/AnsiballZ_file.py'
Jan 31 09:49:50 compute-0 sudo[187426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:51 compute-0 python3.9[187428]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:49:51 compute-0 sudo[187426]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:51 compute-0 podman[187552]: 2026-01-31 09:49:51.823407498 +0000 UTC m=+0.086443384 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:49:51 compute-0 python3.9[187587]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:49:52 compute-0 sudo[187749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uixuwzlnwxgdhrvgnlcbbzuevhcxdrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852992.168444-128-261360557466671/AnsiballZ_group.py'
Jan 31 09:49:52 compute-0 sudo[187749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:52 compute-0 python3.9[187751]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 31 09:49:52 compute-0 sudo[187749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:53 compute-0 sudo[187901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbosjllpubxhbqvvmlipfwkxbgkxada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852993.2031567-139-69271978668322/AnsiballZ_getent.py'
Jan 31 09:49:53 compute-0 sudo[187901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:53 compute-0 python3.9[187903]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 31 09:49:53 compute-0 sudo[187901]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:54 compute-0 sudo[188054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljqaqohtqofelmdxbcrvqfezewegaxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852993.9546123-147-78576596299246/AnsiballZ_group.py'
Jan 31 09:49:54 compute-0 sudo[188054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:54 compute-0 python3.9[188056]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 09:49:54 compute-0 groupadd[188057]: group added to /etc/group: name=ceilometer, GID=42405
Jan 31 09:49:54 compute-0 groupadd[188057]: group added to /etc/gshadow: name=ceilometer
Jan 31 09:49:54 compute-0 groupadd[188057]: new group: name=ceilometer, GID=42405
Jan 31 09:49:54 compute-0 sudo[188054]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:55 compute-0 sudo[188212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdhiqskikfphdpkijaglrsvyszssmmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769852994.6698718-155-259657005511410/AnsiballZ_user.py'
Jan 31 09:49:55 compute-0 sudo[188212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:49:55 compute-0 python3.9[188214]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 09:49:55 compute-0 useradd[188216]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 09:49:55 compute-0 useradd[188216]: add 'ceilometer' to group 'libvirt'
Jan 31 09:49:55 compute-0 useradd[188216]: add 'ceilometer' to shadow group 'libvirt'
Jan 31 09:49:55 compute-0 sudo[188212]: pam_unix(sudo:session): session closed for user root
Jan 31 09:49:56 compute-0 python3.9[188372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:57 compute-0 python3.9[188493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769852996.4163759-181-172204012621486/.source.conf _original_basename=ceilometer.conf follow=False checksum=806b21daa538a66a80669be8bf74c414d178dfbc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:57 compute-0 python3.9[188643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:58 compute-0 python3.9[188764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769852997.5741665-181-66191197604907/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:49:58 compute-0 podman[188888]: 2026-01-31 09:49:58.907055377 +0000 UTC m=+0.091726500 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:49:59 compute-0 python3.9[188922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:49:59 compute-0 python3.9[189061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769852998.6165957-181-233368682052077/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:00 compute-0 python3.9[189211]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:00 compute-0 python3.9[189363]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:01 compute-0 python3.9[189515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:01 compute-0 python3.9[189636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853001.0296626-240-109057155167959/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:02 compute-0 python3.9[189786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:03 compute-0 python3.9[189907]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853002.1077101-240-191179148605903/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:03 compute-0 python3.9[190057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:04 compute-0 python3.9[190178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853003.2171676-269-274321434417206/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:05 compute-0 python3.9[190328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:05 compute-0 python3.9[190449]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853004.7523055-285-144395594501441/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:06 compute-0 python3.9[190599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:06 compute-0 nova_compute[185194]: 2026-01-31 09:50:06.658 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:06 compute-0 python3.9[190720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853005.8459268-300-237409968878332/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:06 compute-0 nova_compute[185194]: 2026-01-31 09:50:06.801 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:07 compute-0 python3.9[190870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:08 compute-0 python3.9[190991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853006.9643083-315-155262934457009/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:08 compute-0 sudo[191141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbnqjagwaruowjiparpwaqyqrkzfceq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853008.4797537-330-186581354810279/AnsiballZ_file.py'
Jan 31 09:50:08 compute-0 sudo[191141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:08 compute-0 python3.9[191143]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:08 compute-0 sudo[191141]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:09 compute-0 sudo[191293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtmlhueahjjumekzdcljybtcwdpmceo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853009.0745754-338-78971982923411/AnsiballZ_file.py'
Jan 31 09:50:09 compute-0 sudo[191293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:09 compute-0 python3.9[191295]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:09 compute-0 sudo[191293]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:10 compute-0 python3.9[191445]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:10 compute-0 python3.9[191597]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:11 compute-0 python3.9[191749]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:12 compute-0 sudo[191901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvhhzzpnbjzehvkgpqsoihehkxoihxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853011.733569-370-76949733632520/AnsiballZ_file.py'
Jan 31 09:50:12 compute-0 sudo[191901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:12 compute-0 python3.9[191903]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:12 compute-0 sudo[191901]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:12 compute-0 sudo[192053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raclfyxwnihtmjrfaqzvydtoayqzcmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853012.388841-378-176127792183119/AnsiballZ_systemd_service.py'
Jan 31 09:50:12 compute-0 sudo[192053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:12 compute-0 python3.9[192055]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:50:13 compute-0 systemd[1]: Reloading.
Jan 31 09:50:13 compute-0 systemd-rc-local-generator[192083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:50:13 compute-0 systemd-sysv-generator[192086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:50:13 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 31 09:50:13 compute-0 sudo[192053]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:13 compute-0 sudo[192243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffoascvxdmbjtsvngoyjgfkzjweyxzwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/AnsiballZ_stat.py'
Jan 31 09:50:13 compute-0 sudo[192243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:14 compute-0 python3.9[192245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:14 compute-0 sudo[192243]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:14 compute-0 sudo[192366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlooxkfazkxjyfwfrijimghejepkzwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/AnsiballZ_copy.py'
Jan 31 09:50:14 compute-0 sudo[192366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:14 compute-0 python3.9[192368]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:14 compute-0 sudo[192366]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:14 compute-0 sudo[192442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqsjyecczzrjfdbmcfuvkpimhslkleg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/AnsiballZ_stat.py'
Jan 31 09:50:14 compute-0 sudo[192442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:15 compute-0 python3.9[192444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:15 compute-0 sudo[192442]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:15 compute-0 sudo[192565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvncbzywzywfjbjadnmodsrshkzkjbxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/AnsiballZ_copy.py'
Jan 31 09:50:15 compute-0 sudo[192565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:15 compute-0 python3.9[192567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853013.6229236-387-104544001061392/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:15 compute-0 sudo[192565]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:50:16.411 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:50:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:50:16.412 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:50:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:50:16.412 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:50:16 compute-0 sudo[192717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngfelxzdrhnwgydrkliwovteojmdtdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853016.4306846-419-227836767178985/AnsiballZ_file.py'
Jan 31 09:50:16 compute-0 sudo[192717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:16 compute-0 python3.9[192719]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:16 compute-0 sudo[192717]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:17 compute-0 sudo[192869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcpqfxqdzfiakeyjukzyzbxqqezjwzpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853017.0299044-427-30358923605180/AnsiballZ_file.py'
Jan 31 09:50:17 compute-0 sudo[192869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:17 compute-0 python3.9[192871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:17 compute-0 sudo[192869]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:18 compute-0 sudo[193021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkfdplvupyntwsibymbkalenugezdpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853017.7148457-435-52900000849306/AnsiballZ_stat.py'
Jan 31 09:50:18 compute-0 sudo[193021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:18 compute-0 python3.9[193023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:18 compute-0 sudo[193021]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:18 compute-0 sudo[193144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqmtdoskglbcoqaqviuetdrbawyxhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853017.7148457-435-52900000849306/AnsiballZ_copy.py'
Jan 31 09:50:18 compute-0 sudo[193144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:18 compute-0 python3.9[193146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853017.7148457-435-52900000849306/.source.json _original_basename=.d0k6sh40 follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:18 compute-0 sudo[193144]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:19 compute-0 python3.9[193296]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:21 compute-0 sudo[193717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfatwsftypprdkjesziejqbirexjeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853020.9633574-475-159446702720897/AnsiballZ_container_config_data.py'
Jan 31 09:50:21 compute-0 sudo[193717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:21 compute-0 python3.9[193719]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 31 09:50:21 compute-0 sudo[193717]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:21 compute-0 podman[193720]: 2026-01-31 09:50:21.978043746 +0000 UTC m=+0.090245467 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:50:22 compute-0 sudo[193888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocqcileohhshlbeoplipgagmvzkcdlvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853022.131688-486-134098290637693/AnsiballZ_container_config_hash.py'
Jan 31 09:50:22 compute-0 sudo[193888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:22 compute-0 python3.9[193890]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:50:22 compute-0 sudo[193888]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:23 compute-0 sudo[194040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kequccbytfkqhaetfathxdherthelbzs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853023.1042979-496-244361601919105/AnsiballZ_edpm_container_manage.py'
Jan 31 09:50:23 compute-0 sudo[194040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:23 compute-0 python3[194042]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:50:24 compute-0 podman[194079]: 2026-01-31 09:50:24.111113158 +0000 UTC m=+0.031017347 image pull 29d56e26655b85dbb8adac3e1ab61f6d15a43ab7cc871b995898a25601dc084c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 31 09:50:24 compute-0 podman[194079]: 2026-01-31 09:50:24.698477612 +0000 UTC m=+0.618381761 container create 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, 
org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6)
Jan 31 09:50:24 compute-0 python3[194042]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 31 09:50:24 compute-0 sudo[194040]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:25 compute-0 sudo[194266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tapwglpugnuzdnwoxedycqoeqrqcbgza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853024.9713345-504-117244125381262/AnsiballZ_stat.py'
Jan 31 09:50:25 compute-0 sudo[194266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:25 compute-0 python3.9[194268]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:25 compute-0 sudo[194266]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:25 compute-0 sudo[194420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qemouhvvhcjqledujsdvrgesmcezgtse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853025.682649-513-112647190614107/AnsiballZ_file.py'
Jan 31 09:50:25 compute-0 sudo[194420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:26 compute-0 python3.9[194422]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:26 compute-0 sudo[194420]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:26 compute-0 sudo[194496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndyouecpdnryouqletntnnjotwrqhxdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853025.682649-513-112647190614107/AnsiballZ_stat.py'
Jan 31 09:50:26 compute-0 sudo[194496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:26 compute-0 python3.9[194498]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:26 compute-0 sudo[194496]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:27 compute-0 sudo[194647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuydydsbdsvveakgdinjmqogwgzpkxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853026.6763673-513-79527934485451/AnsiballZ_copy.py'
Jan 31 09:50:27 compute-0 sudo[194647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:27 compute-0 python3.9[194649]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853026.6763673-513-79527934485451/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:27 compute-0 sudo[194647]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:27 compute-0 sudo[194723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aefkepbyjnkbcuidueqnqsvuswzdvhqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853026.6763673-513-79527934485451/AnsiballZ_systemd.py'
Jan 31 09:50:27 compute-0 sudo[194723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:28 compute-0 python3.9[194725]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:50:28 compute-0 systemd[1]: Reloading.
Jan 31 09:50:28 compute-0 systemd-rc-local-generator[194748]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:50:28 compute-0 systemd-sysv-generator[194752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:50:28 compute-0 sudo[194723]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:28 compute-0 sudo[194834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzmvjbzdcscsumxckvvupwehnjgacmix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853026.6763673-513-79527934485451/AnsiballZ_systemd.py'
Jan 31 09:50:28 compute-0 sudo[194834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.608 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.608 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.609 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.629 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.630 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.630 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.631 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.631 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.632 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.632 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.632 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.633 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.662 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.662 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.663 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.663 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.823 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.825 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5954MB free_disk=72.63866424560547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.825 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.825 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.913 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.914 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:50:28 compute-0 python3.9[194836]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.942 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.960 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.963 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:50:28 compute-0 nova_compute[185194]: 2026-01-31 09:50:28.964 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:50:28 compute-0 systemd[1]: Reloading.
Jan 31 09:50:29 compute-0 systemd-rc-local-generator[194891]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:50:29 compute-0 systemd-sysv-generator[194895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:50:29 compute-0 podman[194838]: 2026-01-31 09:50:29.092396451 +0000 UTC m=+0.128969391 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:50:29 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 31 09:50:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e8f026f0f99b2a6171e890c0d2c65971c87456f7b322e79985a673c44b4f4b/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e8f026f0f99b2a6171e890c0d2c65971c87456f7b322e79985a673c44b4f4b/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e8f026f0f99b2a6171e890c0d2c65971c87456f7b322e79985a673c44b4f4b/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e8f026f0f99b2a6171e890c0d2c65971c87456f7b322e79985a673c44b4f4b/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.
Jan 31 09:50:29 compute-0 podman[194901]: 2026-01-31 09:50:29.408364167 +0000 UTC m=+0.129079534 container init 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + sudo -E kolla_set_configs
Jan 31 09:50:29 compute-0 podman[194901]: 2026-01-31 09:50:29.429852553 +0000 UTC m=+0.150567830 container start 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6)
Jan 31 09:50:29 compute-0 sudo[194922]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: sudo: unable to send audit message: Operation not permitted
Jan 31 09:50:29 compute-0 sudo[194922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:50:29 compute-0 podman[194901]: ceilometer_agent_compute
Jan 31 09:50:29 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 31 09:50:29 compute-0 sudo[194834]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Validating config file
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Copying service configuration files
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: INFO:__main__:Writing out command to execute
Jan 31 09:50:29 compute-0 sudo[194922]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: ++ cat /run_command
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + ARGS=
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + sudo kolla_copy_cacerts
Jan 31 09:50:29 compute-0 sudo[194945]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: sudo: unable to send audit message: Operation not permitted
Jan 31 09:50:29 compute-0 sudo[194945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:50:29 compute-0 sudo[194945]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + [[ ! -n '' ]]
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + . kolla_extend_start
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + umask 0022
Jan 31 09:50:29 compute-0 ceilometer_agent_compute[194915]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 31 09:50:29 compute-0 podman[194923]: 2026-01-31 09:50:29.517942349 +0000 UTC m=+0.074733750 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126)
Jan 31 09:50:29 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:50:29 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Failed with result 'exit-code'.
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.314 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.315 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.316 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.317 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.318 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.319 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 python3.9[195098]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.320 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.321 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.322 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.323 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.324 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.325 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.326 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.327 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.327 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.327 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.327 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.348 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.349 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.349 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.349 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.349 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.350 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.351 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.352 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.353 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.354 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.358 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.359 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.360 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.361 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.362 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.364 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.366 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.367 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.540 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.548 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.549 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.549 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.662 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.663 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.664 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.665 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.666 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.667 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.668 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.669 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.670 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.671 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.672 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.673 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.674 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.675 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.676 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.679 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.693 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.694 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.694 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.695 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.710 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:50:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:50:31 compute-0 sudo[195261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivyhqvktkbioymigchchnztcrwkfatrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853030.782637-558-241691774640049/AnsiballZ_stat.py'
Jan 31 09:50:31 compute-0 sudo[195261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:31 compute-0 python3.9[195263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:31 compute-0 sudo[195261]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:31 compute-0 sudo[195386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwwmogeslmvwkrdksvmiingcxslbhchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853030.782637-558-241691774640049/AnsiballZ_copy.py'
Jan 31 09:50:31 compute-0 sudo[195386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:31 compute-0 python3.9[195388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853030.782637-558-241691774640049/.source.yaml _original_basename=.uennb2l3 follow=False checksum=b1b5e9c6ef99d534175a46888c2979beb84a9a1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:31 compute-0 sudo[195386]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:32 compute-0 sudo[195538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzxdipsfvvjpbdxdonghkcpggkdoibw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853031.9081352-573-257719470642982/AnsiballZ_stat.py'
Jan 31 09:50:32 compute-0 sudo[195538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:32 compute-0 python3.9[195540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:32 compute-0 sudo[195538]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:32 compute-0 sudo[195661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzbeantbiuzkjcrwstbhdtnuzgtcpuez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853031.9081352-573-257719470642982/AnsiballZ_copy.py'
Jan 31 09:50:32 compute-0 sudo[195661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:32 compute-0 python3.9[195663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853031.9081352-573-257719470642982/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:32 compute-0 sudo[195661]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:33 compute-0 sudo[195813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbrxtzmssieafzlpdfdxougwrsvivks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853033.3313198-594-116495361350661/AnsiballZ_file.py'
Jan 31 09:50:33 compute-0 sudo[195813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:33 compute-0 python3.9[195815]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:33 compute-0 sudo[195813]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:34 compute-0 sudo[195965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgnffqiduechfjyetngiksdqnvuampzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853033.9142225-602-65340933988868/AnsiballZ_file.py'
Jan 31 09:50:34 compute-0 sudo[195965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:34 compute-0 python3.9[195967]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:34 compute-0 sudo[195965]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:34 compute-0 sudo[196117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfnnulkxrycsixqfqctljdgswrghplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853034.5328677-610-195226670054322/AnsiballZ_stat.py'
Jan 31 09:50:34 compute-0 sudo[196117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:34 compute-0 python3.9[196119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:34 compute-0 sudo[196117]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:35 compute-0 sudo[196195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjfwkpbvaqsxuhfolzpztsbemhbxwnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853034.5328677-610-195226670054322/AnsiballZ_file.py'
Jan 31 09:50:35 compute-0 sudo[196195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:35 compute-0 python3.9[196197]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.mqunrf7l recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:35 compute-0 sudo[196195]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:36 compute-0 python3.9[196347]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:37 compute-0 sudo[196768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrgisvktvmyytcdcjprwcrliuydntmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853037.395194-647-132694507430426/AnsiballZ_container_config_data.py'
Jan 31 09:50:37 compute-0 sudo[196768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:37 compute-0 python3.9[196770]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 31 09:50:37 compute-0 sudo[196768]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:38 compute-0 sudo[196920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faiqhfhggbjbxtkembzmtxwonynramek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853038.1101747-658-26364854145740/AnsiballZ_container_config_hash.py'
Jan 31 09:50:38 compute-0 sudo[196920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:38 compute-0 python3.9[196922]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:50:38 compute-0 sudo[196920]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:39 compute-0 sudo[197072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbciblsicxyosvjgxxinibfkyqdhugjn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853038.8477695-668-39633462162279/AnsiballZ_edpm_container_manage.py'
Jan 31 09:50:39 compute-0 sudo[197072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:39 compute-0 python3[197074]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:50:39 compute-0 podman[197113]: 2026-01-31 09:50:39.509821212 +0000 UTC m=+0.017666310 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 31 09:50:39 compute-0 podman[197113]: 2026-01-31 09:50:39.713941182 +0000 UTC m=+0.221786250 container create 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:50:39 compute-0 python3[197074]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 31 09:50:39 compute-0 sudo[197072]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:40 compute-0 sudo[197302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpccbxlcjdhrtdcmwiftbegretzuxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853040.0292163-676-216073097112446/AnsiballZ_stat.py'
Jan 31 09:50:40 compute-0 sudo[197302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:40 compute-0 python3.9[197304]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:40 compute-0 sudo[197302]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:41 compute-0 sudo[197456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhnrwaxcovlcfhdnmqznavdfexjanba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853040.789546-685-154169539649051/AnsiballZ_file.py'
Jan 31 09:50:41 compute-0 sudo[197456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:41 compute-0 python3.9[197458]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:41 compute-0 sudo[197456]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:41 compute-0 sudo[197532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umzamqyzibelkmiuktsgwryhiigcxsgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853040.789546-685-154169539649051/AnsiballZ_stat.py'
Jan 31 09:50:41 compute-0 sudo[197532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:41 compute-0 python3.9[197534]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:41 compute-0 sudo[197532]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:42 compute-0 sudo[197683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eexppuzdpobkzejbirobxnfghxoffkph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853041.9231246-685-131632700300762/AnsiballZ_copy.py'
Jan 31 09:50:42 compute-0 sudo[197683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:42 compute-0 python3.9[197685]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853041.9231246-685-131632700300762/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:42 compute-0 sudo[197683]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:42 compute-0 sudo[197759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hetwpmgqrmclvwqwhrtrvzzlliwpolzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853041.9231246-685-131632700300762/AnsiballZ_systemd.py'
Jan 31 09:50:42 compute-0 sudo[197759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:43 compute-0 python3.9[197761]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:50:43 compute-0 systemd[1]: Reloading.
Jan 31 09:50:43 compute-0 systemd-rc-local-generator[197786]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:50:43 compute-0 systemd-sysv-generator[197790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:50:43 compute-0 sudo[197759]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:43 compute-0 sudo[197870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mldmzmlqcjbplwvqvczllanrccucwlcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853041.9231246-685-131632700300762/AnsiballZ_systemd.py'
Jan 31 09:50:43 compute-0 sudo[197870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:43 compute-0 python3.9[197872]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:50:43 compute-0 systemd[1]: Reloading.
Jan 31 09:50:43 compute-0 systemd-sysv-generator[197903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:50:43 compute-0 systemd-rc-local-generator[197899]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:50:44 compute-0 systemd[1]: Starting node_exporter container...
Jan 31 09:50:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91aa7c2b6512943490842eea5bf355b19961cab6a734ebc63e2ed730fb0e6a/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91aa7c2b6512943490842eea5bf355b19961cab6a734ebc63e2ed730fb0e6a/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:50:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.
Jan 31 09:50:44 compute-0 podman[197911]: 2026-01-31 09:50:44.267500665 +0000 UTC m=+0.140985509 container init 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.283Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.283Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.283Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.284Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.284Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.284Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.284Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=arp
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=bcache
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=bonding
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=cpu
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=edac
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=filefd
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=netclass
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=netdev
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=netstat
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=nfs
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=nvme
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=softnet
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=systemd
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=xfs
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.285Z caller=node_exporter.go:117 level=info collector=zfs
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.286Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 31 09:50:44 compute-0 node_exporter[197926]: ts=2026-01-31T09:50:44.286Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 31 09:50:44 compute-0 podman[197911]: 2026-01-31 09:50:44.30061303 +0000 UTC m=+0.174097834 container start 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 09:50:44 compute-0 podman[197911]: node_exporter
Jan 31 09:50:44 compute-0 systemd[1]: Started node_exporter container.
Jan 31 09:50:44 compute-0 sudo[197870]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:44 compute-0 podman[197936]: 2026-01-31 09:50:44.420476152 +0000 UTC m=+0.106569728 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:50:45 compute-0 python3.9[198110]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:50:45 compute-0 sudo[198260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwmcyagnjqqvmpnghecjmujvagvmgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853045.6485662-730-125733018294724/AnsiballZ_stat.py'
Jan 31 09:50:45 compute-0 sudo[198260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:46 compute-0 python3.9[198262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:46 compute-0 sudo[198260]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:46 compute-0 sudo[198385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpljamijgqtidauyeucygvqihapsazfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853045.6485662-730-125733018294724/AnsiballZ_copy.py'
Jan 31 09:50:46 compute-0 sudo[198385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:46 compute-0 python3.9[198387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853045.6485662-730-125733018294724/.source.yaml _original_basename=.seolz89d follow=False checksum=7ef8deb66d27ef258fc8549ee239496789b739bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:46 compute-0 sudo[198385]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:46 compute-0 sudo[198537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbgmzhagdsdmjnntcnvfaalohbissdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853046.733677-745-128391437462047/AnsiballZ_stat.py'
Jan 31 09:50:46 compute-0 sudo[198537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:47 compute-0 python3.9[198539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:47 compute-0 sudo[198537]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:47 compute-0 sudo[198660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxiyceeckjtefkhbarrpyenfytrsmfhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853046.733677-745-128391437462047/AnsiballZ_copy.py'
Jan 31 09:50:47 compute-0 sudo[198660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:47 compute-0 python3.9[198662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853046.733677-745-128391437462047/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:47 compute-0 sudo[198660]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:48 compute-0 sudo[198812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eesbmorrqxoetrtkcekoeummwmmylxot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853048.155265-766-203322571671827/AnsiballZ_file.py'
Jan 31 09:50:48 compute-0 sudo[198812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:48 compute-0 python3.9[198814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:48 compute-0 sudo[198812]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:49 compute-0 sudo[198964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlgsywxqvxacskoocuwargejlqaltuxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853048.8278205-774-198268509582270/AnsiballZ_file.py'
Jan 31 09:50:49 compute-0 sudo[198964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:49 compute-0 python3.9[198966]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:50:49 compute-0 sudo[198964]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:50 compute-0 sudo[199116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceiyzwlcijhdwjpicxsbmtnbmkwtfqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853049.756767-782-48247490179433/AnsiballZ_stat.py'
Jan 31 09:50:50 compute-0 sudo[199116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:50 compute-0 python3.9[199118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:50:50 compute-0 sudo[199116]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:50 compute-0 sudo[199194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyprzeuzvljnpirdnkkklrkujwlwpcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853049.756767-782-48247490179433/AnsiballZ_file.py'
Jan 31 09:50:50 compute-0 sudo[199194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:50 compute-0 python3.9[199196]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.kb1tcbmp recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:50 compute-0 sudo[199194]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:51 compute-0 python3.9[199346]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:52 compute-0 podman[199591]: 2026-01-31 09:50:52.239922543 +0000 UTC m=+0.048037037 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:50:52 compute-0 sudo[199785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdimgbapbdfkokwvnywjkbptdfhvzvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853052.7270753-819-116526000165837/AnsiballZ_container_config_data.py'
Jan 31 09:50:52 compute-0 sudo[199785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:53 compute-0 python3.9[199787]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 31 09:50:53 compute-0 sudo[199785]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:53 compute-0 sudo[199937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fprmeoydjottekmrdjqvqbmjlyaksups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853053.501339-830-242220487222720/AnsiballZ_container_config_hash.py'
Jan 31 09:50:53 compute-0 sudo[199937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:54 compute-0 python3.9[199939]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:50:54 compute-0 sudo[199937]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:54 compute-0 sudo[200089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvkvgqdmtwizhtmidyugdffitzsbtfpu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853054.4622462-840-207035942701456/AnsiballZ_edpm_container_manage.py'
Jan 31 09:50:54 compute-0 sudo[200089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:55 compute-0 python3[200091]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:50:57 compute-0 podman[200104]: 2026-01-31 09:50:57.422828354 +0000 UTC m=+2.371653094 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 31 09:50:57 compute-0 podman[200199]: 2026-01-31 09:50:57.598072049 +0000 UTC m=+0.100058054 container create 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter)
Jan 31 09:50:57 compute-0 podman[200199]: 2026-01-31 09:50:57.522365473 +0000 UTC m=+0.024351518 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 31 09:50:57 compute-0 python3[200091]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z 
quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 31 09:50:57 compute-0 sudo[200089]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:58 compute-0 sudo[200387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqowivveonosvvpmowqfctresfzofip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853058.1964204-848-83671022635767/AnsiballZ_stat.py'
Jan 31 09:50:58 compute-0 sudo[200387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:58 compute-0 python3.9[200389]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:58 compute-0 sudo[200387]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:59 compute-0 sudo[200541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buqdgdqbdykdtqcniknyoiahxlnuznia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853058.8484852-857-238947253952164/AnsiballZ_file.py'
Jan 31 09:50:59 compute-0 sudo[200541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:59 compute-0 python3.9[200543]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:50:59 compute-0 sudo[200541]: pam_unix(sudo:session): session closed for user root
Jan 31 09:50:59 compute-0 sudo[200626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqkfeznigpykwikdafjxmseuklxjaea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853058.8484852-857-238947253952164/AnsiballZ_stat.py'
Jan 31 09:50:59 compute-0 sudo[200626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:50:59 compute-0 podman[200591]: 2026-01-31 09:50:59.584602286 +0000 UTC m=+0.080482973 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:50:59 compute-0 podman[200631]: 2026-01-31 09:50:59.630108349 +0000 UTC m=+0.065791777 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 09:50:59 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:50:59 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Failed with result 'exit-code'.
Jan 31 09:50:59 compute-0 python3.9[200638]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:50:59 compute-0 sudo[200626]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:00 compute-0 sudo[200812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtgvtwxwhvtwytcmbxksdptbiwivroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853059.8265634-857-140234861159323/AnsiballZ_copy.py'
Jan 31 09:51:00 compute-0 sudo[200812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:00 compute-0 python3.9[200814]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853059.8265634-857-140234861159323/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:00 compute-0 sudo[200812]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:00 compute-0 sudo[200888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscocqzqebwsvmvympoqzimbyqlsfrzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853059.8265634-857-140234861159323/AnsiballZ_systemd.py'
Jan 31 09:51:00 compute-0 sudo[200888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:00 compute-0 python3.9[200890]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:51:01 compute-0 systemd[1]: Reloading.
Jan 31 09:51:01 compute-0 systemd-sysv-generator[200915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:51:01 compute-0 systemd-rc-local-generator[200909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:51:01 compute-0 sudo[200888]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:01 compute-0 sudo[200999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqabjshvtnbwrymrsihqimmsnpanucsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853059.8265634-857-140234861159323/AnsiballZ_systemd.py'
Jan 31 09:51:01 compute-0 sudo[200999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:01 compute-0 python3.9[201001]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:51:02 compute-0 systemd[1]: Reloading.
Jan 31 09:51:02 compute-0 systemd-rc-local-generator[201022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:51:02 compute-0 systemd-sysv-generator[201030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:51:02 compute-0 systemd[1]: Starting podman_exporter container...
Jan 31 09:51:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29e8b313170c8d1a3574a5d5aa22762f74ee2d58ee2cf1632349edc82f983ce/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29e8b313170c8d1a3574a5d5aa22762f74ee2d58ee2cf1632349edc82f983ce/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:51:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.
Jan 31 09:51:02 compute-0 podman[201042]: 2026-01-31 09:51:02.660980604 +0000 UTC m=+0.385252761 container init 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.682Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.682Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.682Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.683Z caller=handler.go:105 level=info collector=container
Jan 31 09:51:02 compute-0 podman[201042]: 2026-01-31 09:51:02.685421775 +0000 UTC m=+0.409693932 container start 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:51:02 compute-0 systemd[1]: Starting Podman API Service...
Jan 31 09:51:02 compute-0 systemd[1]: Started Podman API Service.
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="Setting parallel job count to 25"
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="Using sqlite as database backend"
Jan 31 09:51:02 compute-0 podman[201042]: podman_exporter
Jan 31 09:51:02 compute-0 systemd[1]: Started podman_exporter container.
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 31 09:51:02 compute-0 podman[201068]: @ - - [31/Jan/2026:09:51:02 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 31 09:51:02 compute-0 podman[201068]: time="2026-01-31T09:51:02Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:51:02 compute-0 podman[201068]: @ - - [31/Jan/2026:09:51:02 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.887Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.887Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 31 09:51:02 compute-0 podman_exporter[201055]: ts=2026-01-31T09:51:02.887Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 31 09:51:02 compute-0 sudo[200999]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:02 compute-0 podman[201066]: 2026-01-31 09:51:02.907297356 +0000 UTC m=+0.210141001 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 09:51:02 compute-0 systemd[1]: 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e-175bc3f37e92dc6.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:51:02 compute-0 systemd[1]: 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e-175bc3f37e92dc6.service: Failed with result 'exit-code'.
Jan 31 09:51:03 compute-0 python3.9[201251]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:51:04 compute-0 sudo[201401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqzpvgkwtfifnuigxobnxykzjwwtzcwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853064.159973-902-237301902710214/AnsiballZ_stat.py'
Jan 31 09:51:04 compute-0 sudo[201401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:04 compute-0 python3.9[201403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:04 compute-0 sudo[201401]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:04 compute-0 sudo[201526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivcfdrxdmwnhdiqorerrlrcfbnfqhqxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853064.159973-902-237301902710214/AnsiballZ_copy.py'
Jan 31 09:51:04 compute-0 sudo[201526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:05 compute-0 python3.9[201528]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853064.159973-902-237301902710214/.source.yaml _original_basename=.1f3eoxlz follow=False checksum=f7f4221e316b198d21fd1c303e9ba01f683d7a2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:05 compute-0 sudo[201526]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:05 compute-0 sudo[201678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghijncfblvfhbcgkzkxqczdqmtvlnap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853065.344032-917-152284370966092/AnsiballZ_stat.py'
Jan 31 09:51:05 compute-0 sudo[201678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:05 compute-0 python3.9[201680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:05 compute-0 sudo[201678]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:06 compute-0 sudo[201801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcwhokyycrvevzgdhvahdhhukesutml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853065.344032-917-152284370966092/AnsiballZ_copy.py'
Jan 31 09:51:06 compute-0 sudo[201801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:06 compute-0 python3.9[201803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853065.344032-917-152284370966092/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:51:06 compute-0 sudo[201801]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:07 compute-0 sudo[201953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpocgancfimcfcilflmovcajkwmfjbfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853066.9562297-938-179887451392635/AnsiballZ_file.py'
Jan 31 09:51:07 compute-0 sudo[201953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:07 compute-0 python3.9[201955]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:07 compute-0 sudo[201953]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:07 compute-0 sudo[202105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aahfbmdvdoisxwklfmvyooxbfgbwxqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853067.6825135-946-21564930481572/AnsiballZ_file.py'
Jan 31 09:51:07 compute-0 sudo[202105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:08 compute-0 python3.9[202107]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:51:08 compute-0 sudo[202105]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:08 compute-0 sudo[202257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmxdytfllbgbplnlkwpqqwpudqynyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853068.282371-954-250671554250170/AnsiballZ_stat.py'
Jan 31 09:51:08 compute-0 sudo[202257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:08 compute-0 python3.9[202259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:08 compute-0 sudo[202257]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:08 compute-0 sudo[202335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgdlgsgkibegyujsvvhswfpzdsajysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853068.282371-954-250671554250170/AnsiballZ_file.py'
Jan 31 09:51:08 compute-0 sudo[202335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:09 compute-0 python3.9[202337]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.by7jgv2w recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:09 compute-0 sudo[202335]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:09 compute-0 python3.9[202487]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:11 compute-0 auditd[701]: Audit daemon rotating log files
Jan 31 09:51:11 compute-0 sudo[202908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjvpkztjlheazkfmiwkaabdaunfagesw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853071.0429473-991-259921869271547/AnsiballZ_container_config_data.py'
Jan 31 09:51:11 compute-0 sudo[202908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:11 compute-0 python3.9[202910]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 31 09:51:11 compute-0 sudo[202908]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:12 compute-0 sudo[203060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bekrynffwgrmiotsdshwejdxyqpwoqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853071.8737488-1002-53848513030276/AnsiballZ_container_config_hash.py'
Jan 31 09:51:12 compute-0 sudo[203060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:12 compute-0 python3.9[203062]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:51:12 compute-0 sudo[203060]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:12 compute-0 sudo[203212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbmwekdobeoshcftntmakaqlssojjec ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853072.6609108-1012-194711728672672/AnsiballZ_edpm_container_manage.py'
Jan 31 09:51:12 compute-0 sudo[203212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:13 compute-0 python3[203214]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:51:15 compute-0 podman[203272]: 2026-01-31 09:51:15.019758779 +0000 UTC m=+0.147362973 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:51:15 compute-0 podman[203228]: 2026-01-31 09:51:15.5206183 +0000 UTC m=+2.333829487 image pull 2679468753c61ac8a0e14904b347eedc3a9181a15e3bff0987683c22e1f9cae7 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 31 09:51:15 compute-0 podman[203350]: 2026-01-31 09:51:15.686394021 +0000 UTC m=+0.076071804 container create df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.7, distribution-scope=public, release=1769056855, io.buildah.version=1.33.7)
Jan 31 09:51:15 compute-0 podman[203350]: 2026-01-31 09:51:15.635427482 +0000 UTC m=+0.025105285 image pull 2679468753c61ac8a0e14904b347eedc3a9181a15e3bff0987683c22e1f9cae7 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 31 09:51:15 compute-0 python3[203214]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume 
/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 31 09:51:15 compute-0 sudo[203212]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:16 compute-0 sudo[203538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbsucibxyspfkhirambsftckgsypddh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853075.937041-1020-188146141845089/AnsiballZ_stat.py'
Jan 31 09:51:16 compute-0 sudo[203538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:16 compute-0 python3.9[203540]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:51:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:51:16.411 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:51:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:51:16.412 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:51:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:51:16.412 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:51:16 compute-0 sudo[203538]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:16 compute-0 sudo[203692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptnarzpxmhooujfzfzaagtikooiyyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853076.64709-1029-221450139157290/AnsiballZ_file.py'
Jan 31 09:51:16 compute-0 sudo[203692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:17 compute-0 python3.9[203694]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:17 compute-0 sudo[203692]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:17 compute-0 sudo[203768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdbiyjmrwfbqbklrugqvnqafdiwaiauh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853076.64709-1029-221450139157290/AnsiballZ_stat.py'
Jan 31 09:51:17 compute-0 sudo[203768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:17 compute-0 python3.9[203770]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:51:17 compute-0 sudo[203768]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:17 compute-0 sudo[203919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glyjtxazddpeiqruejtqxpebtawdobtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853077.5920577-1029-157401260506621/AnsiballZ_copy.py'
Jan 31 09:51:17 compute-0 sudo[203919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:18 compute-0 python3.9[203921]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853077.5920577-1029-157401260506621/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:18 compute-0 sudo[203919]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:18 compute-0 sudo[203995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobipevqnimjtwciiqtcfqwkagwwtpyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853077.5920577-1029-157401260506621/AnsiballZ_systemd.py'
Jan 31 09:51:18 compute-0 sudo[203995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:18 compute-0 python3.9[203997]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:51:18 compute-0 systemd[1]: Reloading.
Jan 31 09:51:18 compute-0 systemd-sysv-generator[204022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:51:18 compute-0 systemd-rc-local-generator[204018]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:51:19 compute-0 sudo[203995]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:19 compute-0 sudo[204106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbrwglhadfqgewaqwiokrqyardyyykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853077.5920577-1029-157401260506621/AnsiballZ_systemd.py'
Jan 31 09:51:19 compute-0 sudo[204106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:19 compute-0 python3.9[204108]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:51:19 compute-0 systemd[1]: Reloading.
Jan 31 09:51:19 compute-0 systemd-rc-local-generator[204137]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:51:19 compute-0 systemd-sysv-generator[204140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:51:19 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 31 09:51:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17aff3130ea3f39555a2f1710c63a40e34f10e313f15b93a81b83338e3239319/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17aff3130ea3f39555a2f1710c63a40e34f10e313f15b93a81b83338e3239319/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:51:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17aff3130ea3f39555a2f1710c63a40e34f10e313f15b93a81b83338e3239319/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:51:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.
Jan 31 09:51:20 compute-0 podman[204147]: 2026-01-31 09:51:20.046557907 +0000 UTC m=+0.111943438 container init df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, release=1769056855, vcs-type=git, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *bridge.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *coverage.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *datapath.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *iface.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *memory.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *ovn.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *pmd_perf.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *pmd_rxq.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: INFO    09:51:20 main.go:48: registering *vswitch.Collector
Jan 31 09:51:20 compute-0 openstack_network_exporter[204162]: NOTICE  09:51:20 main.go:76: listening on https://:9105/metrics
Jan 31 09:51:20 compute-0 podman[204147]: 2026-01-31 09:51:20.067280022 +0000 UTC m=+0.132665553 container start df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 09:51:20 compute-0 podman[204147]: openstack_network_exporter
Jan 31 09:51:20 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 31 09:51:20 compute-0 sudo[204106]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:20 compute-0 podman[204172]: 2026-01-31 09:51:20.135505673 +0000 UTC m=+0.054778771 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-01-22T05:09:47Z, distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.7, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:51:20 compute-0 python3.9[204344]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:51:21 compute-0 sudo[204494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkfysfnrygketftktcchdppexxgachvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853081.263298-1074-7153482915876/AnsiballZ_stat.py'
Jan 31 09:51:21 compute-0 sudo[204494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:21 compute-0 python3.9[204496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:21 compute-0 sudo[204494]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:22 compute-0 sudo[204619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwussyxjkkuqwqzdgniwwkbvxvopqrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853081.263298-1074-7153482915876/AnsiballZ_copy.py'
Jan 31 09:51:22 compute-0 sudo[204619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:22 compute-0 python3.9[204621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853081.263298-1074-7153482915876/.source.yaml _original_basename=.u7hnvdns follow=False checksum=0253d76bdbded4fa6930e2b246bf750ea20d3274 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:22 compute-0 sudo[204619]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:22 compute-0 podman[204622]: 2026-01-31 09:51:22.357532473 +0000 UTC m=+0.071367831 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 09:51:22 compute-0 sudo[204791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixaksdzzcgwdwajpmavavcyogdzduowo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853082.4424968-1089-224822940129365/AnsiballZ_find.py'
Jan 31 09:51:22 compute-0 sudo[204791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:22 compute-0 python3.9[204793]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:51:22 compute-0 sudo[204791]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:23 compute-0 sudo[204943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audlabpywekxkmhsgdwuuvnfeaqpzzbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853083.2426734-1099-229021359077377/AnsiballZ_podman_container_info.py'
Jan 31 09:51:23 compute-0 sudo[204943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:23 compute-0 python3.9[204945]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 31 09:51:23 compute-0 sudo[204943]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:24 compute-0 sudo[205107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmiknytffuriyanwiyqikzszdszdjekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853084.0559711-1107-120712510092840/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:24 compute-0 sudo[205107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:24 compute-0 python3.9[205109]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:24 compute-0 systemd[1]: Started libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope.
Jan 31 09:51:24 compute-0 podman[205110]: 2026-01-31 09:51:24.819329257 +0000 UTC m=+0.124254452 container exec 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:51:24 compute-0 podman[205130]: 2026-01-31 09:51:24.892456874 +0000 UTC m=+0.052783119 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 09:51:24 compute-0 podman[205110]: 2026-01-31 09:51:24.899904001 +0000 UTC m=+0.204829216 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:51:24 compute-0 systemd[1]: libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope: Deactivated successfully.
Jan 31 09:51:24 compute-0 sudo[205107]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:25 compute-0 sudo[205292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdemfclwseuwhdkflimnkrauciqmixqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853085.0756514-1115-79199100753478/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:25 compute-0 sudo[205292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:25 compute-0 python3.9[205294]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:25 compute-0 systemd[1]: Started libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope.
Jan 31 09:51:25 compute-0 podman[205295]: 2026-01-31 09:51:25.544726571 +0000 UTC m=+0.065613021 container exec 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 09:51:25 compute-0 podman[205315]: 2026-01-31 09:51:25.606378658 +0000 UTC m=+0.051656942 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:51:25 compute-0 podman[205295]: 2026-01-31 09:51:25.611049931 +0000 UTC m=+0.131936111 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 09:51:25 compute-0 systemd[1]: libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope: Deactivated successfully.
Jan 31 09:51:25 compute-0 sudo[205292]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:26 compute-0 sudo[205477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awqtrgoccucqdyqcjcbrlwxwtshdddsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853085.793256-1123-199429082809436/AnsiballZ_file.py'
Jan 31 09:51:26 compute-0 sudo[205477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:26 compute-0 python3.9[205479]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:26 compute-0 sudo[205477]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:26 compute-0 sudo[205629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvncqblyoyewtjusawtywkobryxfghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853086.4919975-1132-266944023565704/AnsiballZ_podman_container_info.py'
Jan 31 09:51:26 compute-0 sudo[205629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:26 compute-0 python3.9[205631]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 31 09:51:26 compute-0 sudo[205629]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:27 compute-0 sudo[205794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wutjtfewctfjpngdshgvielwzinasacd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853087.1333513-1140-18096161584146/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:27 compute-0 sudo[205794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:27 compute-0 python3.9[205796]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:27 compute-0 systemd[1]: Started libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope.
Jan 31 09:51:27 compute-0 podman[205797]: 2026-01-31 09:51:27.723427913 +0000 UTC m=+0.082503146 container exec 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 09:51:27 compute-0 podman[205797]: 2026-01-31 09:51:27.757625169 +0000 UTC m=+0.116700372 container exec_died 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 09:51:27 compute-0 systemd[1]: libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope: Deactivated successfully.
Jan 31 09:51:27 compute-0 sudo[205794]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:28 compute-0 sudo[205976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzoxfmmibkwgmkfnxgfcoplouzdgmwov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853088.0078955-1148-214921617762716/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:28 compute-0 sudo[205976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:28 compute-0 python3.9[205978]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:28 compute-0 systemd[1]: Started libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope.
Jan 31 09:51:28 compute-0 podman[205979]: 2026-01-31 09:51:28.591476025 +0000 UTC m=+0.087406933 container exec 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:51:28 compute-0 podman[205979]: 2026-01-31 09:51:28.62751774 +0000 UTC m=+0.123448638 container exec_died 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 09:51:28 compute-0 systemd[1]: libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope: Deactivated successfully.
Jan 31 09:51:28 compute-0 sudo[205976]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:28 compute-0 nova_compute[185194]: 2026-01-31 09:51:28.955 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:28 compute-0 nova_compute[185194]: 2026-01-31 09:51:28.957 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:29 compute-0 sudo[206158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnxenhhqjvtwuzigmhgbmurressxjel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853088.8282156-1156-93866521830097/AnsiballZ_file.py'
Jan 31 09:51:29 compute-0 sudo[206158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:29 compute-0 python3.9[206160]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:29 compute-0 sudo[206158]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.642 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.643 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.643 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.643 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.831 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.832 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5854MB free_disk=72.4094467163086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.833 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.833 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:51:29 compute-0 sudo[206333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knrxrxesvtzbcpavhaasqxsaaglkzglx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853089.5428016-1165-146496083376608/AnsiballZ_podman_container_info.py'
Jan 31 09:51:29 compute-0 sudo[206333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:29 compute-0 podman[206285]: 2026-01-31 09:51:29.849835713 +0000 UTC m=+0.060811196 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute)
Jan 31 09:51:29 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:51:29 compute-0 systemd[1]: 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca-1e04999f4ac0b988.service: Failed with result 'exit-code'.
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.902 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.903 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:51:29 compute-0 podman[206284]: 2026-01-31 09:51:29.908088767 +0000 UTC m=+0.114689121 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.937 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.952 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.953 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:51:29 compute-0 nova_compute[185194]: 2026-01-31 09:51:29.953 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:51:30 compute-0 python3.9[206344]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 31 09:51:30 compute-0 sudo[206333]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:30 compute-0 sudo[206522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txacnatfkgfonlocvolgpdqkexppltev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853090.255241-1173-185582881305177/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:30 compute-0 sudo[206522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:30 compute-0 python3.9[206524]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:30 compute-0 systemd[1]: Started libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope.
Jan 31 09:51:30 compute-0 podman[206525]: 2026-01-31 09:51:30.825654967 +0000 UTC m=+0.076325679 container exec 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS)
Jan 31 09:51:30 compute-0 podman[206544]: 2026-01-31 09:51:30.885474256 +0000 UTC m=+0.049821383 container exec_died 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:51:30 compute-0 podman[206525]: 2026-01-31 09:51:30.89085556 +0000 UTC m=+0.141526262 container exec_died 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 09:51:30 compute-0 systemd[1]: libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope: Deactivated successfully.
Jan 31 09:51:30 compute-0 sudo[206522]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.952 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.954 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.954 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.967 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.968 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.968 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.969 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:51:30 compute-0 nova_compute[185194]: 2026-01-31 09:51:30.969 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:51:31 compute-0 sudo[206706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afyiltcecwsutxjdratzuabfmhjfwjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853091.0972142-1181-249035657595536/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:31 compute-0 sudo[206706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:31 compute-0 python3.9[206708]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:31 compute-0 systemd[1]: Started libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope.
Jan 31 09:51:31 compute-0 podman[206709]: 2026-01-31 09:51:31.646702892 +0000 UTC m=+0.076794656 container exec 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 31 09:51:31 compute-0 podman[206709]: 2026-01-31 09:51:31.680685186 +0000 UTC m=+0.110776890 container exec_died 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 09:51:31 compute-0 systemd[1]: libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope: Deactivated successfully.
Jan 31 09:51:31 compute-0 sudo[206706]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:32 compute-0 sudo[206888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbfhbtdovasjtpslaiyhmhkuevavtkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853091.8860395-1189-214446036055423/AnsiballZ_file.py'
Jan 31 09:51:32 compute-0 sudo[206888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:32 compute-0 python3.9[206890]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:32 compute-0 sudo[206888]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:32 compute-0 sudo[207040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lslsklxlopjggfntuahxlkunnxoxtkua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853092.6375244-1198-17978053092552/AnsiballZ_podman_container_info.py'
Jan 31 09:51:32 compute-0 sudo[207040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:33 compute-0 python3.9[207042]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 31 09:51:33 compute-0 sudo[207040]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:33 compute-0 sudo[207218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgildbntqndffcwyalbhgtnyjvhcvnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853093.321862-1206-279650672620247/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:33 compute-0 sudo[207218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:33 compute-0 podman[207179]: 2026-01-31 09:51:33.600151778 +0000 UTC m=+0.053006033 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:51:33 compute-0 python3.9[207231]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:33 compute-0 systemd[1]: Started libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope.
Jan 31 09:51:33 compute-0 podman[207232]: 2026-01-31 09:51:33.927414034 +0000 UTC m=+0.097897297 container exec 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:51:33 compute-0 podman[207232]: 2026-01-31 09:51:33.963822586 +0000 UTC m=+0.134305879 container exec_died 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:51:33 compute-0 systemd[1]: libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope: Deactivated successfully.
Jan 31 09:51:34 compute-0 sudo[207218]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:34 compute-0 sudo[207412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnrwdiygjhtuiufjbwmgfnvedxaykwya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853094.1648765-1214-32112207623140/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:34 compute-0 sudo[207412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:34 compute-0 python3.9[207414]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:34 compute-0 systemd[1]: Started libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope.
Jan 31 09:51:34 compute-0 podman[207415]: 2026-01-31 09:51:34.731369121 +0000 UTC m=+0.091520367 container exec 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:51:34 compute-0 podman[207415]: 2026-01-31 09:51:34.765838842 +0000 UTC m=+0.125990108 container exec_died 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:51:34 compute-0 systemd[1]: libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope: Deactivated successfully.
Jan 31 09:51:34 compute-0 sudo[207412]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:35 compute-0 sudo[207597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctmrmfiryixtmogjyfttekqgnvzurxen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853094.9708958-1222-197707334953501/AnsiballZ_file.py'
Jan 31 09:51:35 compute-0 sudo[207597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:35 compute-0 python3.9[207599]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:35 compute-0 sudo[207597]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:36 compute-0 sudo[207749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimuwbvtseusbqemdsozslilfpuwcbfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853095.6533058-1231-21798805641634/AnsiballZ_podman_container_info.py'
Jan 31 09:51:36 compute-0 sudo[207749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:36 compute-0 python3.9[207751]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 31 09:51:36 compute-0 sudo[207749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:36 compute-0 sudo[207915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekagyolvpopbnkewzecgkedehehpmrrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853096.5693293-1239-219827979073481/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:36 compute-0 sudo[207915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:37 compute-0 python3.9[207917]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:37 compute-0 systemd[1]: Started libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope.
Jan 31 09:51:37 compute-0 podman[207918]: 2026-01-31 09:51:37.114663423 +0000 UTC m=+0.072058202 container exec 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:51:37 compute-0 podman[207918]: 2026-01-31 09:51:37.14761067 +0000 UTC m=+0.105005429 container exec_died 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:51:37 compute-0 systemd[1]: libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope: Deactivated successfully.
Jan 31 09:51:37 compute-0 sudo[207915]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:37 compute-0 sudo[208099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glldkuqvodmhdngtaehwfjhlfatmtnju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853097.332042-1247-206767851707411/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:37 compute-0 sudo[208099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:37 compute-0 python3.9[208101]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:37 compute-0 systemd[1]: Started libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope.
Jan 31 09:51:37 compute-0 podman[208102]: 2026-01-31 09:51:37.841602801 +0000 UTC m=+0.074205675 container exec 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:51:37 compute-0 podman[208102]: 2026-01-31 09:51:37.874542118 +0000 UTC m=+0.107145002 container exec_died 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:51:37 compute-0 systemd[1]: libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope: Deactivated successfully.
Jan 31 09:51:37 compute-0 sudo[208099]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:38 compute-0 sudo[208283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvuzawfisriecmzaokregsxeiuhhitp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853098.0500293-1255-30818278975749/AnsiballZ_file.py'
Jan 31 09:51:38 compute-0 sudo[208283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:38 compute-0 python3.9[208285]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:38 compute-0 sudo[208283]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:39 compute-0 sudo[208435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmxnmahlimpxpgjljestbhhxxjysjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853098.7408986-1264-226414751263495/AnsiballZ_podman_container_info.py'
Jan 31 09:51:39 compute-0 sudo[208435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:39 compute-0 python3.9[208437]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 31 09:51:39 compute-0 sudo[208435]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:39 compute-0 sudo[208600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thdhswnwixsoqwvslvnjcrkzhabxtiuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853099.3926268-1272-85348008116415/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:39 compute-0 sudo[208600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:40 compute-0 python3.9[208602]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:40 compute-0 systemd[1]: Started libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope.
Jan 31 09:51:40 compute-0 podman[208603]: 2026-01-31 09:51:40.197187518 +0000 UTC m=+0.078676185 container exec df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:51:40 compute-0 podman[208623]: 2026-01-31 09:51:40.266456625 +0000 UTC m=+0.056608389 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, distribution-scope=public, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Jan 31 09:51:40 compute-0 podman[208603]: 2026-01-31 09:51:40.273638928 +0000 UTC m=+0.155127605 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git)
Jan 31 09:51:40 compute-0 systemd[1]: libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope: Deactivated successfully.
Jan 31 09:51:40 compute-0 sudo[208600]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:40 compute-0 sudo[208784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yduqvhxfzyrdzserosbnrqetdqsoalby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853100.5129426-1280-54320090088347/AnsiballZ_podman_container_exec.py'
Jan 31 09:51:40 compute-0 sudo[208784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:40 compute-0 python3.9[208786]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:51:41 compute-0 systemd[1]: Started libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope.
Jan 31 09:51:41 compute-0 podman[208787]: 2026-01-31 09:51:41.077138938 +0000 UTC m=+0.075353264 container exec df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, distribution-scope=public, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1769056855, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc.)
Jan 31 09:51:41 compute-0 podman[208807]: 2026-01-31 09:51:41.143402278 +0000 UTC m=+0.055849998 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.buildah.version=1.33.7, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 09:51:41 compute-0 podman[208787]: 2026-01-31 09:51:41.197258233 +0000 UTC m=+0.195472589 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1769056855, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=9.7, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Jan 31 09:51:41 compute-0 systemd[1]: libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope: Deactivated successfully.
Jan 31 09:51:41 compute-0 sudo[208784]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:41 compute-0 sudo[208969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspbpqerfeqefyxwbukvxkehzqvyocmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853101.4081776-1288-209394734336607/AnsiballZ_file.py'
Jan 31 09:51:41 compute-0 sudo[208969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:41 compute-0 python3.9[208971]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:41 compute-0 sudo[208969]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:42 compute-0 sudo[209121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vedulggdatsjxgnzsverowqfhlujprkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853102.0603213-1297-274073966590076/AnsiballZ_file.py'
Jan 31 09:51:42 compute-0 sudo[209121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:42 compute-0 python3.9[209123]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:42 compute-0 sudo[209121]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:42 compute-0 sudo[209273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhowhyguiqekmujaypmnrlombvdwudvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853102.6974974-1305-152121285914258/AnsiballZ_stat.py'
Jan 31 09:51:42 compute-0 sudo[209273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:43 compute-0 python3.9[209275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:43 compute-0 sudo[209273]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:43 compute-0 sudo[209396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbfgjgwmwcblqtedkqlctfaunsenzkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853102.6974974-1305-152121285914258/AnsiballZ_copy.py'
Jan 31 09:51:43 compute-0 sudo[209396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:43 compute-0 python3.9[209398]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853102.6974974-1305-152121285914258/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:43 compute-0 sudo[209396]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:44 compute-0 sudo[209548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lovefwfkzznaspoweuyuftvkddaefwxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853104.04216-1321-57455976450697/AnsiballZ_file.py'
Jan 31 09:51:44 compute-0 sudo[209548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:44 compute-0 python3.9[209550]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:44 compute-0 sudo[209548]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:45 compute-0 sudo[209700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvoffbpypvmazkpzdgnvqibzkkpuskmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853104.7784314-1329-189655528848580/AnsiballZ_stat.py'
Jan 31 09:51:45 compute-0 sudo[209700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:45 compute-0 python3.9[209702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:45 compute-0 sudo[209700]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:45 compute-0 sudo[209790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcumstzemnrjtevxlhzssccveoeomsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853104.7784314-1329-189655528848580/AnsiballZ_file.py'
Jan 31 09:51:45 compute-0 sudo[209790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:45 compute-0 podman[209752]: 2026-01-31 09:51:45.478747714 +0000 UTC m=+0.070237043 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:51:45 compute-0 python3.9[209798]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:45 compute-0 sudo[209790]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:46 compute-0 sudo[209954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhrzbuobhdsvljusaawbxidwfnqpqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853105.8228173-1341-28653726559540/AnsiballZ_stat.py'
Jan 31 09:51:46 compute-0 sudo[209954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:47 compute-0 python3.9[209956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:47 compute-0 sudo[209954]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:47 compute-0 sudo[210032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnorasqafgenagjtxeekxssmtdtihhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853105.8228173-1341-28653726559540/AnsiballZ_file.py'
Jan 31 09:51:47 compute-0 sudo[210032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:47 compute-0 python3.9[210034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yddgl6c1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:47 compute-0 sudo[210032]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:47 compute-0 sudo[210184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtwbahxdogxfcduaizxtxhnmagnntgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853107.6120772-1353-195173249400926/AnsiballZ_stat.py'
Jan 31 09:51:47 compute-0 sudo[210184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:48 compute-0 python3.9[210186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:48 compute-0 sudo[210184]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:48 compute-0 sudo[210262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fllzlmbbmylvtplqyvzuvpdanybtccun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853107.6120772-1353-195173249400926/AnsiballZ_file.py'
Jan 31 09:51:48 compute-0 sudo[210262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:48 compute-0 python3.9[210264]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:48 compute-0 sudo[210262]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:48 compute-0 sudo[210414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvmcmyfmlvvluovhesahibelcvcrzgsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853108.589785-1366-15382577788057/AnsiballZ_command.py'
Jan 31 09:51:48 compute-0 sudo[210414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:48 compute-0 python3.9[210416]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:51:48 compute-0 sudo[210414]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:49 compute-0 sudo[210567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhnsefirtvsviqhhqupczcadcyoanebb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853109.147792-1374-31093794119090/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 09:51:49 compute-0 sudo[210567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:49 compute-0 python3[210569]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 09:51:49 compute-0 sudo[210567]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:50 compute-0 sudo[210730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjkbtleibasbabusclrzrgmvsujlhgxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853109.917046-1382-52830207522738/AnsiballZ_stat.py'
Jan 31 09:51:50 compute-0 sudo[210730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:50 compute-0 podman[210693]: 2026-01-31 09:51:50.226370663 +0000 UTC m=+0.048073177 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9)
Jan 31 09:51:50 compute-0 python3.9[210738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:50 compute-0 sudo[210730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:50 compute-0 sudo[210818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkblpqnrkkrpvfxfhdufeibotznkkns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853109.917046-1382-52830207522738/AnsiballZ_file.py'
Jan 31 09:51:50 compute-0 sudo[210818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:50 compute-0 python3.9[210820]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:50 compute-0 sudo[210818]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:51 compute-0 sudo[210970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqmruardclipqyqtbzckjbtzaapitchy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853110.9877064-1394-74406914665421/AnsiballZ_stat.py'
Jan 31 09:51:51 compute-0 sudo[210970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:51 compute-0 python3.9[210972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:51 compute-0 sudo[210970]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:51 compute-0 sudo[211048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysuvmmiotpunspzujctxifxmtbxkyckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853110.9877064-1394-74406914665421/AnsiballZ_file.py'
Jan 31 09:51:51 compute-0 sudo[211048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:52 compute-0 python3.9[211050]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:52 compute-0 sudo[211048]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:52 compute-0 sudo[211211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiolmwsrduowrkarcvrspgawkrdxqgzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853112.2450058-1406-145763797195708/AnsiballZ_stat.py'
Jan 31 09:51:52 compute-0 sudo[211211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:52 compute-0 podman[211174]: 2026-01-31 09:51:52.559007592 +0000 UTC m=+0.071870684 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:51:52 compute-0 python3.9[211220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:52 compute-0 sudo[211211]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:52 compute-0 sudo[211298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpermhrfzwhfizsbxtmwtgyxgyjeorxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853112.2450058-1406-145763797195708/AnsiballZ_file.py'
Jan 31 09:51:52 compute-0 sudo[211298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:53 compute-0 python3.9[211300]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:53 compute-0 sudo[211298]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:53 compute-0 sudo[211450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqvrrwloidxhqwygtccpmuonmaitkvfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853113.282354-1418-181639865895518/AnsiballZ_stat.py'
Jan 31 09:51:53 compute-0 sudo[211450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:53 compute-0 python3.9[211452]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:53 compute-0 sudo[211450]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:53 compute-0 sudo[211528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhpzyazkudiebkvruxwyznjwxuktqosx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853113.282354-1418-181639865895518/AnsiballZ_file.py'
Jan 31 09:51:53 compute-0 sudo[211528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:54 compute-0 python3.9[211530]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:54 compute-0 sudo[211528]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:54 compute-0 sudo[211680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipplamezhtpdxquqjgihaspteacpimlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853114.2719615-1430-127648251541029/AnsiballZ_stat.py'
Jan 31 09:51:54 compute-0 sudo[211680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:54 compute-0 python3.9[211682]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:51:54 compute-0 sudo[211680]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:55 compute-0 sudo[211805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkuxrluwpafxeywvqczqdlxnassyitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853114.2719615-1430-127648251541029/AnsiballZ_copy.py'
Jan 31 09:51:55 compute-0 sudo[211805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:55 compute-0 python3.9[211807]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853114.2719615-1430-127648251541029/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:55 compute-0 sudo[211805]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:55 compute-0 sudo[211957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtakemtxuyfehhcidogcttbdclnmfbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853115.4255285-1445-10250437819382/AnsiballZ_file.py'
Jan 31 09:51:55 compute-0 sudo[211957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:55 compute-0 python3.9[211959]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:55 compute-0 sudo[211957]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:56 compute-0 sudo[212109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwwllitrkyvirdvlgnkyfkwckhkqgrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853116.0347495-1453-164133349229474/AnsiballZ_command.py'
Jan 31 09:51:56 compute-0 sudo[212109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:56 compute-0 python3.9[212111]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:51:56 compute-0 sudo[212109]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:57 compute-0 sudo[212264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnzyokrfxibzvsmtbwxumtpwgklyyccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853116.6435118-1461-266826633153874/AnsiballZ_blockinfile.py'
Jan 31 09:51:57 compute-0 sudo[212264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:57 compute-0 python3.9[212266]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:57 compute-0 sudo[212264]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:57 compute-0 sudo[212416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcibfhwmsvxiewigekpaphstfnevjgbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853117.4695747-1470-20249117639654/AnsiballZ_command.py'
Jan 31 09:51:57 compute-0 sudo[212416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:57 compute-0 python3.9[212418]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:51:57 compute-0 sudo[212416]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:58 compute-0 sudo[212569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsghcnpnohnyaqoyzmxcepksfjxveud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853118.0967638-1478-73046163023848/AnsiballZ_stat.py'
Jan 31 09:51:58 compute-0 sudo[212569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:58 compute-0 python3.9[212571]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:51:58 compute-0 sudo[212569]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:58 compute-0 sudo[212723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfgwshjvxhgyuuptnjvogiveliregvla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853118.668617-1486-240623088538834/AnsiballZ_command.py'
Jan 31 09:51:58 compute-0 sudo[212723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:59 compute-0 python3.9[212725]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:51:59 compute-0 sudo[212723]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:59 compute-0 sudo[212878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgmrtawevgkivpkykuggbqobvuxoiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853119.2598076-1494-51890941508953/AnsiballZ_file.py'
Jan 31 09:51:59 compute-0 sudo[212878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:51:59 compute-0 python3.9[212880]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:51:59 compute-0 sudo[212878]: pam_unix(sudo:session): session closed for user root
Jan 31 09:51:59 compute-0 podman[201068]: time="2026-01-31T09:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:51:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21453 "" "Go-http-client/1.1"
Jan 31 09:51:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Jan 31 09:52:00 compute-0 sshd-session[185544]: Connection closed by 192.168.122.30 port 37990
Jan 31 09:52:00 compute-0 sshd-session[185541]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:52:00 compute-0 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Jan 31 09:52:00 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 31 09:52:00 compute-0 systemd[1]: session-25.scope: Consumed 1min 36.874s CPU time.
Jan 31 09:52:00 compute-0 systemd-logind[795]: Removed session 25.
Jan 31 09:52:00 compute-0 podman[212908]: 2026-01-31 09:52:00.195259189 +0000 UTC m=+0.043358441 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 09:52:00 compute-0 podman[212907]: 2026-01-31 09:52:00.216879998 +0000 UTC m=+0.065692789 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:52:01 compute-0 openstack_network_exporter[204162]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:52:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:52:01 compute-0 openstack_network_exporter[204162]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:52:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:52:01 compute-0 anacron[7484]: Job `cron.monthly' started
Jan 31 09:52:01 compute-0 anacron[7484]: Job `cron.monthly' terminated
Jan 31 09:52:01 compute-0 anacron[7484]: Normal exit (3 jobs run)
Jan 31 09:52:03 compute-0 podman[212960]: 2026-01-31 09:52:03.929486992 +0000 UTC m=+0.056493063 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:52:04 compute-0 rsyslogd[1004]: imjournal: 2873 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 31 09:52:06 compute-0 sshd-session[212985]: Accepted publickey for zuul from 192.168.122.30 port 37054 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:52:06 compute-0 systemd-logind[795]: New session 26 of user zuul.
Jan 31 09:52:06 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 31 09:52:06 compute-0 sshd-session[212985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:52:07 compute-0 sudo[213138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzihnewaoseajtjzecptuaaasjbunoqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853126.5363555-19-218024717926456/AnsiballZ_systemd_service.py'
Jan 31 09:52:07 compute-0 sudo[213138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:07 compute-0 python3.9[213140]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:52:07 compute-0 systemd[1]: Reloading.
Jan 31 09:52:07 compute-0 systemd-rc-local-generator[213166]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:52:07 compute-0 systemd-sysv-generator[213171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:52:07 compute-0 sudo[213138]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:08 compute-0 python3.9[213326]: ansible-ansible.builtin.service_facts Invoked
Jan 31 09:52:08 compute-0 network[213343]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 09:52:08 compute-0 network[213344]: 'network-scripts' will be removed from distribution in near future.
Jan 31 09:52:08 compute-0 network[213345]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 09:52:11 compute-0 sudo[213614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjuhawbsmvozikwncalythgjtvqmzrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853130.9564693-42-40450258338952/AnsiballZ_systemd_service.py'
Jan 31 09:52:11 compute-0 sudo[213614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:11 compute-0 python3.9[213616]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:52:11 compute-0 sudo[213614]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:12 compute-0 sudo[213767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijgvwapkjfvjuzucicvaodrhrjzjbxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853131.9251602-52-24905645443716/AnsiballZ_file.py'
Jan 31 09:52:12 compute-0 sudo[213767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:12 compute-0 python3.9[213769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:12 compute-0 sudo[213767]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:12 compute-0 sudo[213919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhfnnkxxmalbbelhmpagrfaucdebqri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853132.6383288-60-95716398284569/AnsiballZ_file.py'
Jan 31 09:52:12 compute-0 sudo[213919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:13 compute-0 python3.9[213921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:13 compute-0 sudo[213919]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:13 compute-0 sudo[214071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihknrgszoarfvkuonyvwxejbbzsrztit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853133.5158343-69-195992615626680/AnsiballZ_command.py'
Jan 31 09:52:13 compute-0 sudo[214071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:14 compute-0 python3.9[214073]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:52:14 compute-0 sudo[214071]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:14 compute-0 python3.9[214225]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:52:15 compute-0 sudo[214375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlsvktpmeiqvjvmdikqntchshlmhary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853135.1340215-87-157626879770197/AnsiballZ_systemd_service.py'
Jan 31 09:52:15 compute-0 sudo[214375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:15 compute-0 python3.9[214377]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:52:15 compute-0 systemd[1]: Reloading.
Jan 31 09:52:15 compute-0 podman[214379]: 2026-01-31 09:52:15.824308558 +0000 UTC m=+0.055662558 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:52:15 compute-0 systemd-rc-local-generator[214431]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:52:15 compute-0 systemd-sysv-generator[214434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:52:16 compute-0 sudo[214375]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:52:16.412 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:52:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:52:16.413 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:52:16 compute-0 sudo[214588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzibetjyyofhhyqojjxpzxiygccpletw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853136.1816852-95-117113379282276/AnsiballZ_command.py'
Jan 31 09:52:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:52:16.413 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:52:16 compute-0 sudo[214588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:16 compute-0 python3.9[214590]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:52:16 compute-0 sudo[214588]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:17 compute-0 sudo[214741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kafzljvmwkcrkkelhipbridshrqwtgpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853136.8693762-104-174913649872164/AnsiballZ_file.py'
Jan 31 09:52:17 compute-0 sudo[214741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:17 compute-0 python3.9[214743]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:17 compute-0 sudo[214741]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:18 compute-0 python3.9[214893]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:18 compute-0 python3.9[215045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:19 compute-0 python3.9[215166]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853138.4334707-120-36359024624249/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:20 compute-0 python3.9[215316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:20 compute-0 podman[215411]: 2026-01-31 09:52:20.48723741 +0000 UTC m=+0.060370948 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 31 09:52:20 compute-0 python3.9[215451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853139.7165194-135-151336070445860/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:21 compute-0 sudo[215607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikubpzyulbzrysgrwtaejxsddeiwwql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853140.9663918-153-46777942094250/AnsiballZ_getent.py'
Jan 31 09:52:21 compute-0 sudo[215607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:21 compute-0 python3.9[215609]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 31 09:52:21 compute-0 sudo[215607]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:22 compute-0 podman[215734]: 2026-01-31 09:52:22.757985377 +0000 UTC m=+0.050683913 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:52:22 compute-0 python3.9[215771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:23 compute-0 python3.9[215898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769853142.4762604-181-270169708467586/.source.conf _original_basename=ceilometer.conf follow=False checksum=f817847bb0474d7c55a7ad9afdea5f1400a30720 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:23 compute-0 python3.9[216049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:24 compute-0 python3.9[216170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769853143.5283413-181-8600247312750/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:25 compute-0 python3.9[216320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:25 compute-0 python3.9[216441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769853144.5759327-181-161724681806627/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:26 compute-0 python3.9[216591]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:27 compute-0 python3.9[216743]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:27 compute-0 python3.9[216895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:28 compute-0 python3.9[217016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853147.2699914-240-71212571644951/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:28 compute-0 sudo[217166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmjqxmbkqhosqgeqakfjoochtbmgkzsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853148.3593688-255-222712292200334/AnsiballZ_file.py'
Jan 31 09:52:28 compute-0 sudo[217166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:28 compute-0 python3.9[217168]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:28 compute-0 sudo[217166]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:29 compute-0 sudo[217318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwodmrvxxvukprhhpjqubqcdqnryyjvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853148.9744859-263-216783712678209/AnsiballZ_file.py'
Jan 31 09:52:29 compute-0 sudo[217318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:29 compute-0 python3.9[217320]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:29 compute-0 sudo[217318]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:29 compute-0 nova_compute[185194]: 2026-01-31 09:52:29.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:29 compute-0 nova_compute[185194]: 2026-01-31 09:52:29.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:29 compute-0 podman[201068]: time="2026-01-31T09:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:52:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21451 "" "Go-http-client/1.1"
Jan 31 09:52:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Jan 31 09:52:29 compute-0 sudo[217472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqpmtmcogjotnpdldlorbjcapazujpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853149.5650992-271-269116954357422/AnsiballZ_file.py'
Jan 31 09:52:29 compute-0 sudo[217472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:30 compute-0 python3.9[217474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:30 compute-0 sudo[217472]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.671 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.671 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.671 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.671 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.693 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.694 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.694 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd942e31e80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.697 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.698 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.701 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.702 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:52:30.703 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:52:30 compute-0 sudo[217648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhdcrmbtaaygfrvtozazqbvwonquffeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/AnsiballZ_stat.py'
Jan 31 09:52:30 compute-0 sudo[217648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:30 compute-0 podman[217600]: 2026-01-31 09:52:30.816273828 +0000 UTC m=+0.085680244 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6)
Jan 31 09:52:30 compute-0 podman[217599]: 2026-01-31 09:52:30.834518178 +0000 UTC m=+0.099051640 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.857 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.859 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5811MB free_disk=72.45915603637695GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.859 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.859 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.933 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.933 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.952 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.967 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.968 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:52:30 compute-0 nova_compute[185194]: 2026-01-31 09:52:30.968 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:52:30 compute-0 python3.9[217661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:31 compute-0 sudo[217648]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:31 compute-0 sudo[217793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfwzxcpaadktlkozssiurfkudmrxrlih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/AnsiballZ_copy.py'
Jan 31 09:52:31 compute-0 sudo[217793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:31 compute-0 openstack_network_exporter[204162]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:52:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:52:31 compute-0 openstack_network_exporter[204162]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:52:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:52:31 compute-0 python3.9[217795]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:31 compute-0 sudo[217793]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:31 compute-0 sudo[217869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvnplkqzpjcighaqgosauraepjrrrgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/AnsiballZ_stat.py'
Jan 31 09:52:31 compute-0 sudo[217869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:31 compute-0 nova_compute[185194]: 2026-01-31 09:52:31.968 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:31 compute-0 nova_compute[185194]: 2026-01-31 09:52:31.969 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:31 compute-0 nova_compute[185194]: 2026-01-31 09:52:31.969 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:52:32 compute-0 python3.9[217871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:32 compute-0 sudo[217869]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:32 compute-0 sudo[217992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvxfbiypvitoktaoiycpwkagopizmwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/AnsiballZ_copy.py'
Jan 31 09:52:32 compute-0 sudo[217992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:32 compute-0 nova_compute[185194]: 2026-01-31 09:52:32.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:32 compute-0 nova_compute[185194]: 2026-01-31 09:52:32.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:52:32 compute-0 nova_compute[185194]: 2026-01-31 09:52:32.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:52:32 compute-0 nova_compute[185194]: 2026-01-31 09:52:32.624 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:52:32 compute-0 nova_compute[185194]: 2026-01-31 09:52:32.625 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:52:32 compute-0 python3.9[217994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853150.2704115-279-198577721395092/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:32 compute-0 sudo[217992]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:33 compute-0 sudo[218144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjypbbvyblswlduimxwulskzqexdwxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853152.9817827-279-131481961569852/AnsiballZ_stat.py'
Jan 31 09:52:33 compute-0 sudo[218144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:33 compute-0 python3.9[218146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:33 compute-0 sudo[218144]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:33 compute-0 sudo[218267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtkfqvwlowiplxpdvwgjdvytyacdzazs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853152.9817827-279-131481961569852/AnsiballZ_copy.py'
Jan 31 09:52:33 compute-0 sudo[218267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:33 compute-0 python3.9[218269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769853152.9817827-279-131481961569852/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:33 compute-0 sudo[218267]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:34 compute-0 sudo[218430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceukqydrqbstibofjfgrmsikeevjcjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853154.2601242-321-86421508818686/AnsiballZ_file.py'
Jan 31 09:52:34 compute-0 sudo[218430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:34 compute-0 podman[218393]: 2026-01-31 09:52:34.568934818 +0000 UTC m=+0.085416529 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:52:34 compute-0 python3.9[218437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:34 compute-0 sudo[218430]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:35 compute-0 sudo[218595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuaizdeucyomecpwkmjrnifgtoialet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853155.1676223-329-270823460605806/AnsiballZ_file.py'
Jan 31 09:52:35 compute-0 sudo[218595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:35 compute-0 python3.9[218597]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:35 compute-0 sudo[218595]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:36 compute-0 sudo[218747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmxncprcvecrylekxcstquimzsyrkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853155.8034542-337-210374118111751/AnsiballZ_stat.py'
Jan 31 09:52:36 compute-0 sudo[218747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:36 compute-0 python3.9[218749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:36 compute-0 sudo[218747]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:36 compute-0 sudo[218870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebczrjylaistdilvcosubjiirihvymf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853155.8034542-337-210374118111751/AnsiballZ_copy.py'
Jan 31 09:52:36 compute-0 sudo[218870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:36 compute-0 python3.9[218872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853155.8034542-337-210374118111751/.source.json _original_basename=.vacb2dqg follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:36 compute-0 sudo[218870]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:37 compute-0 python3.9[219022]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:39 compute-0 sudo[219443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtufdtjkycmaphbvhqonpepvxvbjzbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853159.2814083-377-188343883636147/AnsiballZ_container_config_data.py'
Jan 31 09:52:39 compute-0 sudo[219443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:40 compute-0 python3.9[219445]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 31 09:52:40 compute-0 sudo[219443]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:40 compute-0 sudo[219595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gifdkiscuwobjcbbakeactpwpshgbvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853160.4759135-388-250341801446117/AnsiballZ_container_config_hash.py'
Jan 31 09:52:40 compute-0 sudo[219595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:41 compute-0 python3.9[219597]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:52:41 compute-0 sudo[219595]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:42 compute-0 sudo[219747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zngddmuyztxyapwfmkcrgnmneweswnos ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853161.4836617-398-170247790860988/AnsiballZ_edpm_container_manage.py'
Jan 31 09:52:42 compute-0 sudo[219747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:42 compute-0 python3[219749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:52:42 compute-0 podman[219787]: 2026-01-31 09:52:42.528859395 +0000 UTC m=+0.024090133 image pull 5a0c248a731dc2e1754b1906fede374f0f92203547e5b10eb435ef1a64b36296 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 31 09:52:43 compute-0 podman[219787]: 2026-01-31 09:52:43.197752389 +0000 UTC m=+0.692983037 container create 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi)
Jan 31 09:52:43 compute-0 python3[219749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 31 09:52:43 compute-0 sudo[219747]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:43 compute-0 sudo[219974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbovympwgfxajpdppsjrloovjbhurtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853163.6320398-406-242017762875964/AnsiballZ_stat.py'
Jan 31 09:52:43 compute-0 sudo[219974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:44 compute-0 python3.9[219976]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:44 compute-0 sudo[219974]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:44 compute-0 sudo[220128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlaysexoumlcxnwzvsjlndrwemygxmsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853164.3898005-415-202623875254878/AnsiballZ_file.py'
Jan 31 09:52:44 compute-0 sudo[220128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:44 compute-0 python3.9[220130]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:44 compute-0 sudo[220128]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:45 compute-0 sudo[220204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxmwicgdhtnadumezzixlblodtnhltlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853164.3898005-415-202623875254878/AnsiballZ_stat.py'
Jan 31 09:52:45 compute-0 sudo[220204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:45 compute-0 python3.9[220206]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:45 compute-0 sudo[220204]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:45 compute-0 sudo[220355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajhcjrbvlqhjulfyljqotxbppvfczcro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853165.3733-415-129407696643177/AnsiballZ_copy.py'
Jan 31 09:52:45 compute-0 sudo[220355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:46 compute-0 python3.9[220357]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853165.3733-415-129407696643177/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:46 compute-0 sudo[220355]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:46 compute-0 sudo[220440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-selazwytxtjoculwcpwpwejvxezcalvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853165.3733-415-129407696643177/AnsiballZ_systemd.py'
Jan 31 09:52:46 compute-0 sudo[220440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:46 compute-0 podman[220405]: 2026-01-31 09:52:46.659938219 +0000 UTC m=+0.069817169 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:52:47 compute-0 python3.9[220451]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:52:47 compute-0 systemd[1]: Reloading.
Jan 31 09:52:47 compute-0 systemd-rc-local-generator[220481]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:52:47 compute-0 systemd-sysv-generator[220489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:52:47 compute-0 sudo[220440]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:47 compute-0 sudo[220568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfsbrxqhcxnrorjdtxgcenrxenpywxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853165.3733-415-129407696643177/AnsiballZ_systemd.py'
Jan 31 09:52:47 compute-0 sudo[220568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:47 compute-0 python3.9[220570]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:52:48 compute-0 systemd[1]: Reloading.
Jan 31 09:52:48 compute-0 systemd-sysv-generator[220601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:52:48 compute-0 systemd-rc-local-generator[220595]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:52:48 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 31 09:52:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 31 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 31 09:52:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.
Jan 31 09:52:49 compute-0 podman[220609]: 2026-01-31 09:52:49.027828298 +0000 UTC m=+0.657397225 container init 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + sudo -E kolla_set_configs
Jan 31 09:52:49 compute-0 podman[220609]: 2026-01-31 09:52:49.066801136 +0000 UTC m=+0.696370023 container start 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 31 09:52:49 compute-0 sudo[220631]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 31 09:52:49 compute-0 sudo[220631]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:52:49 compute-0 sudo[220631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Validating config file
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Copying service configuration files
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: INFO:__main__:Writing out command to execute
Jan 31 09:52:49 compute-0 sudo[220631]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: ++ cat /run_command
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + ARGS=
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + sudo kolla_copy_cacerts
Jan 31 09:52:49 compute-0 podman[220609]: ceilometer_agent_ipmi
Jan 31 09:52:49 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 31 09:52:49 compute-0 sudo[220646]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 31 09:52:49 compute-0 sudo[220646]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:52:49 compute-0 sudo[220646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:52:49 compute-0 sudo[220646]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + [[ ! -n '' ]]
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + . kolla_extend_start
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + umask 0022
Jan 31 09:52:49 compute-0 ceilometer_agent_ipmi[220625]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 31 09:52:49 compute-0 podman[220632]: 2026-01-31 09:52:49.249310245 +0000 UTC m=+0.173123110 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 09:52:49 compute-0 systemd[1]: 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-3a7b39acfc37109a.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:52:49 compute-0 systemd[1]: 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-3a7b39acfc37109a.service: Failed with result 'exit-code'.
Jan 31 09:52:49 compute-0 sudo[220568]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.238 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.238 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.238 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.238 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.239 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.240 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.241 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.242 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.243 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.244 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.245 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.246 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.250 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.251 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.252 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.253 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.254 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.255 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.256 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.257 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.306 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.307 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.308 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 31 09:52:50 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.379 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmps2m3z986/privsep.sock']
Jan 31 09:52:50 compute-0 python3.9[220806]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:52:50 compute-0 sudo[220811]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmps2m3z986/privsep.sock
Jan 31 09:52:50 compute-0 sudo[220811]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:52:50 compute-0 sudo[220811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:52:50 compute-0 podman[220863]: 2026-01-31 09:52:50.959247669 +0000 UTC m=+0.079550986 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1769056855, managed_by=edpm_ansible)
Jan 31 09:52:51 compute-0 sudo[220811]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.005 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.006 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps2m3z986/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.877 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.882 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.886 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:50.886 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 31 09:52:51 compute-0 sudo[220988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgftkayzenoefsfcwnhbvwqnqxzvemx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853170.8386965-460-14270861819725/AnsiballZ_stat.py'
Jan 31 09:52:51 compute-0 sudo[220988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.100 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.101 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.102 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.102 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.103 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.103 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.103 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.103 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.104 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.104 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.104 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.104 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.104 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.109 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.109 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.109 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.109 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.109 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.110 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.110 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.110 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.110 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.110 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.111 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.111 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.111 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.111 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.112 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.112 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.112 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.112 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.112 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.113 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.113 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.113 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.113 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.113 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.116 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.117 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.117 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.117 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.117 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.117 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.118 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.118 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.118 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.118 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.118 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.119 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.120 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.120 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.120 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.120 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.120 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.121 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.122 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.122 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.122 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.122 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.123 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.123 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.123 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.123 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.123 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.124 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.124 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.124 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.124 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.128 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.129 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.129 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.129 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.129 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.129 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.130 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.131 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.132 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.134 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.135 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.136 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.137 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.138 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.139 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.140 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.141 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.141 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 31 09:52:51 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:52:51.143 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 31 09:52:51 compute-0 python3.9[220990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:52:51 compute-0 sudo[220988]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:51 compute-0 sudo[221115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrnguokflgwrvdzriddjqtigkuawqxke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853170.8386965-460-14270861819725/AnsiballZ_copy.py'
Jan 31 09:52:51 compute-0 sudo[221115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:52 compute-0 python3.9[221117]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853170.8386965-460-14270861819725/.source.yaml _original_basename=.74hazlji follow=False checksum=a1d27d837c6f35dee762f2ec2230cf6884c254ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:52 compute-0 sudo[221115]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:52 compute-0 sudo[221267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbopipvxjvsgkapllhhqowahfrhokcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853172.3296244-477-75438060181719/AnsiballZ_file.py'
Jan 31 09:52:52 compute-0 sudo[221267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:52 compute-0 python3.9[221269]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:52 compute-0 sudo[221267]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:52 compute-0 podman[221270]: 2026-01-31 09:52:52.845441654 +0000 UTC m=+0.056349192 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 09:52:53 compute-0 sudo[221438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcewqddxkxsxwjwrgcmumbnrgbvwftlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853172.9664388-485-14831846498877/AnsiballZ_file.py'
Jan 31 09:52:53 compute-0 sudo[221438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:53 compute-0 python3.9[221440]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 09:52:53 compute-0 sudo[221438]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:54 compute-0 python3.9[221590]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:52:55 compute-0 sudo[222011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxgtetjnoqekeyojofqhczilzmstvge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853175.6028984-519-163686142582246/AnsiballZ_container_config_data.py'
Jan 31 09:52:55 compute-0 sudo[222011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:56 compute-0 python3.9[222013]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 31 09:52:56 compute-0 sudo[222011]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:56 compute-0 sudo[222163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srnfrwrzzkubgoktslxodvjvmloxxqtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853176.4845545-530-206474516847068/AnsiballZ_container_config_hash.py'
Jan 31 09:52:56 compute-0 sudo[222163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:56 compute-0 python3.9[222165]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 09:52:56 compute-0 sudo[222163]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:57 compute-0 sudo[222315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sylhsswkfpbugihfxknkupowcnsqxzvt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853177.2409923-540-74886191681373/AnsiballZ_edpm_container_manage.py'
Jan 31 09:52:57 compute-0 sudo[222315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:57 compute-0 python3[222317]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 09:52:58 compute-0 podman[222352]: 2026-01-31 09:52:57.970836642 +0000 UTC m=+0.018835402 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 31 09:52:58 compute-0 podman[222352]: 2026-01-31 09:52:58.778793653 +0000 UTC m=+0.826792393 container create 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=kepler, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 09:52:58 compute-0 python3[222317]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 31 09:52:58 compute-0 sudo[222315]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:59 compute-0 sudo[222540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coiiyslwpqogkbymtofdwqabltvvzhka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853179.1055496-548-145259889504002/AnsiballZ_stat.py'
Jan 31 09:52:59 compute-0 sudo[222540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:52:59 compute-0 python3.9[222542]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:52:59 compute-0 sudo[222540]: pam_unix(sudo:session): session closed for user root
Jan 31 09:52:59 compute-0 podman[201068]: time="2026-01-31T09:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:52:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27464 "" "Go-http-client/1.1"
Jan 31 09:52:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3449 "" "Go-http-client/1.1"
Jan 31 09:53:00 compute-0 sudo[222694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcblebzbpdmspaofpibfuqavypvsvdyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853179.8854172-557-5066168567485/AnsiballZ_file.py'
Jan 31 09:53:00 compute-0 sudo[222694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:00 compute-0 python3.9[222696]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:00 compute-0 sudo[222694]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:00 compute-0 sudo[222770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aksoyjoexstwzapasxddqqyhqbckcntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853179.8854172-557-5066168567485/AnsiballZ_stat.py'
Jan 31 09:53:00 compute-0 sudo[222770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:00 compute-0 python3.9[222772]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:53:00 compute-0 podman[222774]: 2026-01-31 09:53:00.968054358 +0000 UTC m=+0.090935444 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute)
Jan 31 09:53:00 compute-0 podman[222773]: 2026-01-31 09:53:00.96810905 +0000 UTC m=+0.091665359 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:53:00 compute-0 sudo[222770]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:01 compute-0 sudo[222966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhbgkwoiqzuxnobwvlwixxaxzphbbhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853181.0290687-557-247506205770707/AnsiballZ_copy.py'
Jan 31 09:53:01 compute-0 sudo[222966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:01 compute-0 openstack_network_exporter[204162]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:53:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:53:01 compute-0 openstack_network_exporter[204162]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:53:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:53:01 compute-0 python3.9[222968]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769853181.0290687-557-247506205770707/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:01 compute-0 sudo[222966]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:01 compute-0 sudo[223042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nggpdiukrkmyxtozktztrhjxmqzjpxcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853181.0290687-557-247506205770707/AnsiballZ_systemd.py'
Jan 31 09:53:01 compute-0 sudo[223042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:02 compute-0 python3.9[223044]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 09:53:02 compute-0 systemd[1]: Reloading.
Jan 31 09:53:02 compute-0 systemd-rc-local-generator[223069]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:53:02 compute-0 systemd-sysv-generator[223072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:53:02 compute-0 sudo[223042]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:02 compute-0 sudo[223153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xseylyxoywffutqynjqnvnlykfnriaci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853181.0290687-557-247506205770707/AnsiballZ_systemd.py'
Jan 31 09:53:02 compute-0 sudo[223153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:03 compute-0 python3.9[223155]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 09:53:03 compute-0 systemd[1]: Reloading.
Jan 31 09:53:03 compute-0 systemd-rc-local-generator[223183]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 09:53:03 compute-0 systemd-sysv-generator[223186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 09:53:03 compute-0 systemd[1]: Starting kepler container...
Jan 31 09:53:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:53:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.
Jan 31 09:53:03 compute-0 podman[223195]: 2026-01-31 09:53:03.881791383 +0000 UTC m=+0.399229415 container init 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, container_name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Jan 31 09:53:03 compute-0 kepler[223211]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 31 09:53:03 compute-0 podman[223195]: 2026-01-31 09:53:03.914946129 +0000 UTC m=+0.432384131 container start 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, container_name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.920060       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.920538       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.920554       1 config.go:295] kernel version: 5.14
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.921088       1 power.go:78] Unable to obtain power, use estimate method
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.921106       1 redfish.go:169] failed to get redfish credential file path
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.921346       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.921357       1 power.go:79] using none to obtain power
Jan 31 09:53:03 compute-0 kepler[223211]: E0131 09:53:03.921367       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 31 09:53:03 compute-0 kepler[223211]: E0131 09:53:03.921388       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 31 09:53:03 compute-0 kepler[223211]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 31 09:53:03 compute-0 kepler[223211]: I0131 09:53:03.922590       1 exporter.go:84] Number of CPUs: 8
Jan 31 09:53:03 compute-0 podman[223195]: kepler
Jan 31 09:53:03 compute-0 systemd[1]: Started kepler container.
Jan 31 09:53:04 compute-0 podman[223221]: 2026-01-31 09:53:04.030323341 +0000 UTC m=+0.105262809 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release=1214.1726694543, version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, config_id=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 31 09:53:04 compute-0 systemd[1]: 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-6ac54e61115fde8d.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:53:04 compute-0 systemd[1]: 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-6ac54e61115fde8d.service: Failed with result 'exit-code'.
Jan 31 09:53:04 compute-0 sudo[223153]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.429020       1 watcher.go:83] Using in cluster k8s config
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.429067       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 31 09:53:04 compute-0 kepler[223211]: E0131 09:53:04.429143       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.432430       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.432464       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.435516       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.435543       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.441082       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.441112       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.441123       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446662       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446691       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446694       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446698       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446702       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446710       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446767       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446787       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446803       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446816       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.446876       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 31 09:53:04 compute-0 kepler[223211]: I0131 09:53:04.447944       1 exporter.go:208] Started Kepler in 528.205218ms
Jan 31 09:53:04 compute-0 python3.9[223402]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 09:53:04 compute-0 podman[223427]: 2026-01-31 09:53:04.948657978 +0000 UTC m=+0.076113891 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:53:05 compute-0 sudo[223576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olmpalhkqueuycfsxpvijtvbpkwggxcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853185.3315945-602-59083561802172/AnsiballZ_stat.py'
Jan 31 09:53:05 compute-0 sudo[223576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:06 compute-0 python3.9[223578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:06 compute-0 sudo[223576]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:06 compute-0 sudo[223701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccrkqtuxvsdcqaemqjnguyliycovdef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853185.3315945-602-59083561802172/AnsiballZ_copy.py'
Jan 31 09:53:06 compute-0 sudo[223701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:06 compute-0 python3.9[223703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853185.3315945-602-59083561802172/.source.yaml _original_basename=.3ajtui_m follow=False checksum=705f4ab8c23b2414ede8ac2dcaf6e618a193ba72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:06 compute-0 sudo[223701]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:07 compute-0 sudo[223853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzhayztdqjduzkozafglglyazytynxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853186.861538-617-161644314592941/AnsiballZ_systemd.py'
Jan 31 09:53:07 compute-0 sudo[223853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:07 compute-0 python3.9[223855]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:53:07 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 31 09:53:07 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:53:07.810 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 31 09:53:07 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:53:07.912 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 31 09:53:07 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:53:07.913 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 31 09:53:07 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:53:07.913 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 31 09:53:07 compute-0 ceilometer_agent_ipmi[220625]: 2026-01-31 09:53:07.920 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 31 09:53:08 compute-0 systemd[1]: libpod-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope: Deactivated successfully.
Jan 31 09:53:08 compute-0 systemd[1]: libpod-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope: Consumed 2.004s CPU time.
Jan 31 09:53:08 compute-0 podman[223862]: 2026-01-31 09:53:08.07395728 +0000 UTC m=+0.461331446 container died 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127)
Jan 31 09:53:08 compute-0 systemd[1]: 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-3a7b39acfc37109a.timer: Deactivated successfully.
Jan 31 09:53:08 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.
Jan 31 09:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-userdata-shm.mount: Deactivated successfully.
Jan 31 09:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89-merged.mount: Deactivated successfully.
Jan 31 09:53:08 compute-0 podman[223862]: 2026-01-31 09:53:08.227175009 +0000 UTC m=+0.614549155 container cleanup 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi)
Jan 31 09:53:08 compute-0 podman[223862]: ceilometer_agent_ipmi
Jan 31 09:53:08 compute-0 podman[223892]: ceilometer_agent_ipmi
Jan 31 09:53:08 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 31 09:53:08 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 31 09:53:08 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 31 09:53:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 31 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 31 09:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47f6aad6d8f8ac3bfbd4ae0a1654c3b9733a3307807249e171e82b7130a4a89/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 31 09:53:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.
Jan 31 09:53:08 compute-0 podman[223904]: 2026-01-31 09:53:08.467209862 +0000 UTC m=+0.133203762 container init 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + sudo -E kolla_set_configs
Jan 31 09:53:08 compute-0 sudo[223926]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 31 09:53:08 compute-0 sudo[223926]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:53:08 compute-0 sudo[223926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:53:08 compute-0 podman[223904]: 2026-01-31 09:53:08.494554477 +0000 UTC m=+0.160548387 container start 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:53:08 compute-0 podman[223904]: ceilometer_agent_ipmi
Jan 31 09:53:08 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 31 09:53:08 compute-0 sudo[223853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Validating config file
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Copying service configuration files
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: INFO:__main__:Writing out command to execute
Jan 31 09:53:08 compute-0 sudo[223926]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: ++ cat /run_command
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + ARGS=
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + sudo kolla_copy_cacerts
Jan 31 09:53:08 compute-0 sudo[223949]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 31 09:53:08 compute-0 sudo[223949]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:53:08 compute-0 sudo[223949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:53:08 compute-0 sudo[223949]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + [[ ! -n '' ]]
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + . kolla_extend_start
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + umask 0022
Jan 31 09:53:08 compute-0 ceilometer_agent_ipmi[223920]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 31 09:53:08 compute-0 podman[223927]: 2026-01-31 09:53:08.578396285 +0000 UTC m=+0.075942069 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:53:08 compute-0 systemd[1]: 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-abb9363aae6991c.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:53:08 compute-0 systemd[1]: 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882-abb9363aae6991c.service: Failed with result 'exit-code'.
Jan 31 09:53:09 compute-0 sudo[224101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzfqzcgvxbqzloqnsmjfahytwhgmkkhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853188.7612808-625-245614916296757/AnsiballZ_systemd.py'
Jan 31 09:53:09 compute-0 sudo[224101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.350 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.350 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.350 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.350 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.350 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.351 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.352 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.353 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.354 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.355 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.356 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.357 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.360 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.361 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.362 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.363 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.364 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.366 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.367 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.368 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.369 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.388 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.389 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.390 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.400 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmprfgdvkdi/privsep.sock']
Jan 31 09:53:09 compute-0 sudo[224108]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmprfgdvkdi/privsep.sock
Jan 31 09:53:09 compute-0 sudo[224108]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 09:53:09 compute-0 sudo[224108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 31 09:53:09 compute-0 python3.9[224103]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:53:09 compute-0 systemd[1]: Stopping kepler container...
Jan 31 09:53:09 compute-0 sudo[224108]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.995 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.996 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprfgdvkdi/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.888 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:53:09 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.891 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.893 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:09.893 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 31 09:53:10 compute-0 kepler[223211]: I0131 09:53:10.034325       1 exporter.go:218] Received shutdown signal
Jan 31 09:53:10 compute-0 kepler[223211]: I0131 09:53:10.035208       1 exporter.go:226] Exiting...
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.106 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.106 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.107 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.107 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.107 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.108 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.111 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.112 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.113 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.115 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.116 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.117 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.118 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.119 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.120 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.121 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.122 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.123 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.124 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.125 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.126 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.127 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.128 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.129 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.130 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.131 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.132 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 31 09:53:10 compute-0 ceilometer_agent_ipmi[223920]: 2026-01-31 09:53:10.136 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 31 09:53:10 compute-0 systemd[1]: libpod-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.scope: Deactivated successfully.
Jan 31 09:53:10 compute-0 podman[224114]: 2026-01-31 09:53:10.203802204 +0000 UTC m=+0.677848301 container died 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, architecture=x86_64, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.buildah.version=1.29.0, distribution-scope=public, managed_by=edpm_ansible, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 31 09:53:10 compute-0 systemd[1]: 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-6ac54e61115fde8d.timer: Deactivated successfully.
Jan 31 09:53:10 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.
Jan 31 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-userdata-shm.mount: Deactivated successfully.
Jan 31 09:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-121a60847a2a52950e2e4a367cb1fb86487d903be52a0c2b4276c824fb59339a-merged.mount: Deactivated successfully.
Jan 31 09:53:10 compute-0 podman[224114]: 2026-01-31 09:53:10.244356014 +0000 UTC m=+0.718402101 container cleanup 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, architecture=x86_64, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Jan 31 09:53:10 compute-0 podman[224114]: kepler
Jan 31 09:53:10 compute-0 podman[224146]: kepler
Jan 31 09:53:10 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 31 09:53:10 compute-0 systemd[1]: Stopped kepler container.
Jan 31 09:53:10 compute-0 systemd[1]: Starting kepler container...
Jan 31 09:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:53:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.
Jan 31 09:53:10 compute-0 podman[224159]: 2026-01-31 09:53:10.428636833 +0000 UTC m=+0.106325821 container init 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:53:10 compute-0 kepler[224175]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.452247       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.452378       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.452413       1 config.go:295] kernel version: 5.14
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.452849       1 power.go:78] Unable to obtain power, use estimate method
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.452872       1 redfish.go:169] failed to get redfish credential file path
Jan 31 09:53:10 compute-0 podman[224159]: 2026-01-31 09:53:10.452813657 +0000 UTC m=+0.130502645 container start 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4)
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.453149       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.453166       1 power.go:79] using none to obtain power
Jan 31 09:53:10 compute-0 kepler[224175]: E0131 09:53:10.453179       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 31 09:53:10 compute-0 kepler[224175]: E0131 09:53:10.453198       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 31 09:53:10 compute-0 kepler[224175]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.454650       1 exporter.go:84] Number of CPUs: 8
Jan 31 09:53:10 compute-0 podman[224159]: kepler
Jan 31 09:53:10 compute-0 systemd[1]: Started kepler container.
Jan 31 09:53:10 compute-0 sudo[224101]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:10 compute-0 podman[224185]: 2026-01-31 09:53:10.518763541 +0000 UTC m=+0.054940386 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64)
Jan 31 09:53:10 compute-0 systemd[1]: 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-1b4a7ca8824d607a.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 09:53:10 compute-0 systemd[1]: 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60-1b4a7ca8824d607a.service: Failed with result 'exit-code'.
Jan 31 09:53:10 compute-0 sudo[224358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfctrlgsxupkqfgxlixmvggkamfhrnfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853190.625525-633-134988798015055/AnsiballZ_find.py'
Jan 31 09:53:10 compute-0 sudo[224358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.954852       1 watcher.go:83] Using in cluster k8s config
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.954919       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 31 09:53:10 compute-0 kepler[224175]: E0131 09:53:10.955034       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.960019       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.960088       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.964254       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.964286       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.970770       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.970817       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.970839       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977107       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977144       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977148       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977152       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977159       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977174       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.977313       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.978075       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.978127       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.978172       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.978748       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 31 09:53:10 compute-0 kepler[224175]: I0131 09:53:10.979291       1 exporter.go:208] Started Kepler in 527.224556ms
Jan 31 09:53:11 compute-0 python3.9[224360]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 09:53:11 compute-0 sudo[224358]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:11 compute-0 sudo[224520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsesfuqftfntntfuleccshnhpkmeazbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853191.4768882-643-100938865089872/AnsiballZ_podman_container_info.py'
Jan 31 09:53:12 compute-0 sudo[224520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:12 compute-0 python3.9[224522]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 31 09:53:12 compute-0 sudo[224520]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:13 compute-0 sudo[224685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnrsciclmiufyostdewlnzpnzhcczzlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853192.492623-651-23598053697167/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:13 compute-0 sudo[224685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:13 compute-0 python3.9[224687]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:13 compute-0 systemd[1]: Started libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope.
Jan 31 09:53:13 compute-0 podman[224688]: 2026-01-31 09:53:13.380748227 +0000 UTC m=+0.120151089 container exec 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:53:13 compute-0 podman[224688]: 2026-01-31 09:53:13.417066441 +0000 UTC m=+0.156469253 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:53:13 compute-0 sudo[224685]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:13 compute-0 systemd[1]: libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope: Deactivated successfully.
Jan 31 09:53:14 compute-0 sudo[224865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvbncxehcmyogpzuamnoxcxtwlhemdrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853193.6829453-659-244603657856010/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:14 compute-0 sudo[224865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:14 compute-0 python3.9[224867]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:14 compute-0 systemd[1]: Started libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope.
Jan 31 09:53:14 compute-0 podman[224868]: 2026-01-31 09:53:14.461981543 +0000 UTC m=+0.120454464 container exec 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:53:14 compute-0 podman[224868]: 2026-01-31 09:53:14.49787497 +0000 UTC m=+0.156347861 container exec_died 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:53:14 compute-0 systemd[1]: libpod-conmon-57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab.scope: Deactivated successfully.
Jan 31 09:53:14 compute-0 sudo[224865]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:15 compute-0 sudo[225046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urbcivhjoetdorpyfruolfwmbnykxazb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853194.747916-667-113308408089603/AnsiballZ_file.py'
Jan 31 09:53:15 compute-0 sudo[225046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:15 compute-0 python3.9[225048]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:15 compute-0 sudo[225046]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:15 compute-0 sudo[225198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtzbgsvwoemgtbyhovrkcigpgymuxyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853195.6127057-676-189390458251930/AnsiballZ_podman_container_info.py'
Jan 31 09:53:15 compute-0 sudo[225198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:16 compute-0 python3.9[225200]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 31 09:53:16 compute-0 sudo[225198]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:53:16.414 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:53:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:53:16.415 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:53:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:53:16.415 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:53:16 compute-0 sudo[225378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatvnrbzppftkhchubovlfemcowmbeam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853196.4641557-684-178542472167961/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:16 compute-0 sudo[225378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:16 compute-0 podman[225336]: 2026-01-31 09:53:16.905864632 +0000 UTC m=+0.125219451 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:53:17 compute-0 python3.9[225387]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:17 compute-0 systemd[1]: Started libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope.
Jan 31 09:53:17 compute-0 podman[225390]: 2026-01-31 09:53:17.196510278 +0000 UTC m=+0.098955921 container exec 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:53:17 compute-0 podman[225390]: 2026-01-31 09:53:17.228183597 +0000 UTC m=+0.130629260 container exec_died 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:53:17 compute-0 systemd[1]: libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope: Deactivated successfully.
Jan 31 09:53:17 compute-0 sudo[225378]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:18 compute-0 sudo[225568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisybdoqpokadznbpqtzmcslmcodkots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853197.6842055-692-107240728688807/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:18 compute-0 sudo[225568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:18 compute-0 python3.9[225570]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:18 compute-0 systemd[1]: Started libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope.
Jan 31 09:53:18 compute-0 podman[225571]: 2026-01-31 09:53:18.378500167 +0000 UTC m=+0.095814503 container exec 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:53:18 compute-0 podman[225571]: 2026-01-31 09:53:18.410766017 +0000 UTC m=+0.128080353 container exec_died 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:53:18 compute-0 sudo[225568]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:18 compute-0 systemd[1]: libpod-conmon-1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480.scope: Deactivated successfully.
Jan 31 09:53:19 compute-0 sudo[225751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyprrwcdcmukuayztkcpvsaagttwiwug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853198.6640027-700-200396568886075/AnsiballZ_file.py'
Jan 31 09:53:19 compute-0 sudo[225751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:19 compute-0 python3.9[225753]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:19 compute-0 sudo[225751]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:19 compute-0 sudo[225904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdbxwvijecsckyoyqyjhmckdiqqhmpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853199.5068803-709-177883870011208/AnsiballZ_podman_container_info.py'
Jan 31 09:53:19 compute-0 sudo[225904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:20 compute-0 python3.9[225906]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 31 09:53:20 compute-0 sudo[225904]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:20 compute-0 sudo[226072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sycsyrynusimzpvyrhblzhcttljcuiio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853200.450279-717-163830660269851/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:20 compute-0 sudo[226072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:21 compute-0 python3.9[226074]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:21 compute-0 systemd[1]: Started libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope.
Jan 31 09:53:21 compute-0 podman[226075]: 2026-01-31 09:53:21.139567138 +0000 UTC m=+0.113689391 container exec 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute)
Jan 31 09:53:21 compute-0 podman[226075]: 2026-01-31 09:53:21.173915766 +0000 UTC m=+0.148037989 container exec_died 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, io.buildah.version=1.41.4)
Jan 31 09:53:21 compute-0 sudo[226072]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:21 compute-0 systemd[1]: libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope: Deactivated successfully.
Jan 31 09:53:21 compute-0 podman[226090]: 2026-01-31 09:53:21.264614815 +0000 UTC m=+0.128694735 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_id=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7)
Jan 31 09:53:21 compute-0 sudo[226276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqufjrqwxznjktfbvfmwpfhlohbjwfqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853201.411588-725-7384713211435/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:21 compute-0 sudo[226276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:21 compute-0 python3.9[226278]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:22 compute-0 systemd[1]: Started libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope.
Jan 31 09:53:22 compute-0 podman[226279]: 2026-01-31 09:53:22.155737694 +0000 UTC m=+0.163809217 container exec 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.build-date=20260126)
Jan 31 09:53:22 compute-0 podman[226279]: 2026-01-31 09:53:22.18940152 +0000 UTC m=+0.197473013 container exec_died 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 09:53:22 compute-0 systemd[1]: libpod-conmon-5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca.scope: Deactivated successfully.
Jan 31 09:53:22 compute-0 sudo[226276]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:22 compute-0 sudo[226458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithrfbzbnfeitgkovacvaeteqtehmapl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853202.4464526-733-171791038166868/AnsiballZ_file.py'
Jan 31 09:53:22 compute-0 sudo[226458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:22 compute-0 python3.9[226460]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:23 compute-0 sudo[226458]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:23 compute-0 podman[226461]: 2026-01-31 09:53:23.115736972 +0000 UTC m=+0.091690468 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:53:23 compute-0 sudo[226629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdzgsgqbaehruacdqgyekachopnvrlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853203.2601142-742-65819320090584/AnsiballZ_podman_container_info.py'
Jan 31 09:53:23 compute-0 sudo[226629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:23 compute-0 python3.9[226631]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 31 09:53:23 compute-0 sudo[226629]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:24 compute-0 sudo[226793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcpqxdgpheeybwovvrdhunalnvrwmfaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853204.0870183-750-272684286282492/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:24 compute-0 sudo[226793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:24 compute-0 python3.9[226795]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:24 compute-0 systemd[1]: Started libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope.
Jan 31 09:53:24 compute-0 podman[226796]: 2026-01-31 09:53:24.765738262 +0000 UTC m=+0.108619848 container exec 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:53:24 compute-0 podman[226796]: 2026-01-31 09:53:24.801041117 +0000 UTC m=+0.143922703 container exec_died 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 09:53:24 compute-0 sudo[226793]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:24 compute-0 systemd[1]: libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope: Deactivated successfully.
Jan 31 09:53:25 compute-0 sudo[226973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yespzzkadtsainvhkhjobbjlujwobgke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853205.0245924-758-116615145898303/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:25 compute-0 sudo[226973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:25 compute-0 python3.9[226975]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:25 compute-0 systemd[1]: Started libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope.
Jan 31 09:53:25 compute-0 podman[226976]: 2026-01-31 09:53:25.720155998 +0000 UTC m=+0.124426007 container exec 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 09:53:25 compute-0 podman[226976]: 2026-01-31 09:53:25.753757903 +0000 UTC m=+0.158027942 container exec_died 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:53:25 compute-0 systemd[1]: libpod-conmon-7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593.scope: Deactivated successfully.
Jan 31 09:53:25 compute-0 sudo[226973]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:26 compute-0 sudo[227157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdvdxacatmdlzbhbyskilxdtpmtgguz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853206.0061522-766-257595333411545/AnsiballZ_file.py'
Jan 31 09:53:26 compute-0 sudo[227157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:26 compute-0 python3.9[227159]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:26 compute-0 sudo[227157]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:27 compute-0 sudo[227309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnohelmqpufemolfoamkdmoiuhinobzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853206.9967425-775-37544104611983/AnsiballZ_podman_container_info.py'
Jan 31 09:53:27 compute-0 sudo[227309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:27 compute-0 python3.9[227311]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 31 09:53:27 compute-0 sudo[227309]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:28 compute-0 sudo[227473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvsfdaooalxhtsnzgdxuhaltboqlalnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853207.8035505-783-247738507797821/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:28 compute-0 sudo[227473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:28 compute-0 python3.9[227475]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:28 compute-0 systemd[1]: Started libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope.
Jan 31 09:53:28 compute-0 podman[227476]: 2026-01-31 09:53:28.502428016 +0000 UTC m=+0.100612051 container exec 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:53:28 compute-0 podman[227476]: 2026-01-31 09:53:28.53651282 +0000 UTC m=+0.134696785 container exec_died 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 09:53:28 compute-0 systemd[1]: libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope: Deactivated successfully.
Jan 31 09:53:28 compute-0 sudo[227473]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:29 compute-0 sudo[227654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhlxmqixireqaahlwonpkggyigoejrbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853208.801329-791-156970644879205/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:29 compute-0 sudo[227654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:29 compute-0 python3.9[227656]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:29 compute-0 systemd[1]: Started libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope.
Jan 31 09:53:29 compute-0 podman[227657]: 2026-01-31 09:53:29.556600278 +0000 UTC m=+0.117122243 container exec 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:53:29 compute-0 podman[227657]: 2026-01-31 09:53:29.59938459 +0000 UTC m=+0.159906545 container exec_died 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:53:29 compute-0 nova_compute[185194]: 2026-01-31 09:53:29.618 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:29 compute-0 systemd[1]: libpod-conmon-041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e.scope: Deactivated successfully.
Jan 31 09:53:29 compute-0 sudo[227654]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:29 compute-0 podman[201068]: time="2026-01-31T09:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:53:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27471 "" "Go-http-client/1.1"
Jan 31 09:53:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3865 "" "Go-http-client/1.1"
Jan 31 09:53:30 compute-0 sudo[227833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhqkbleywckmipduiozichouniasiwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853209.8487484-799-50681795326974/AnsiballZ_file.py'
Jan 31 09:53:30 compute-0 sudo[227833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:30 compute-0 python3.9[227835]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:30 compute-0 sudo[227833]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:31 compute-0 sudo[227985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvtkoswwlcevkumojgoihiftmbsjkflz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853210.6521153-808-139024203046125/AnsiballZ_podman_container_info.py'
Jan 31 09:53:31 compute-0 sudo[227985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:31 compute-0 podman[227988]: 2026-01-31 09:53:31.182532836 +0000 UTC m=+0.131751981 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true)
Jan 31 09:53:31 compute-0 podman[227987]: 2026-01-31 09:53:31.217856292 +0000 UTC m=+0.168650235 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:53:31 compute-0 python3.9[227989]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 31 09:53:31 compute-0 sudo[227985]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:31 compute-0 openstack_network_exporter[204162]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:53:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:53:31 compute-0 openstack_network_exporter[204162]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:53:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.730 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.730 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.730 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:53:31 compute-0 nova_compute[185194]: 2026-01-31 09:53:31.731 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.065 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.068 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5648MB free_disk=72.45740509033203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.068 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.069 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:53:32 compute-0 sudo[228195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeclalwmiomnwvnzrvkrktcfqcoxhlmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853211.7958176-816-84222515232957/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:32 compute-0 sudo[228195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.144 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.145 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.167 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.186 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.190 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:53:32 compute-0 nova_compute[185194]: 2026-01-31 09:53:32.190 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:53:32 compute-0 python3.9[228197]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:32 compute-0 systemd[1]: Started libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope.
Jan 31 09:53:32 compute-0 podman[228198]: 2026-01-31 09:53:32.464360591 +0000 UTC m=+0.095185572 container exec df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-22T05:09:47Z, distribution-scope=public, version=9.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Jan 31 09:53:32 compute-0 podman[228198]: 2026-01-31 09:53:32.476092526 +0000 UTC m=+0.106917497 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-type=git, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Jan 31 09:53:32 compute-0 sudo[228195]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:32 compute-0 systemd[1]: libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope: Deactivated successfully.
Jan 31 09:53:33 compute-0 sudo[228376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzlstgyiwbzgigcugtagqvhxqqfkfnns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853212.7205634-824-165424126437291/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:33 compute-0 sudo[228376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.186 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.206 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.207 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.207 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 python3.9[228378]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:33 compute-0 systemd[1]: Started libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope.
Jan 31 09:53:33 compute-0 podman[228379]: 2026-01-31 09:53:33.385289646 +0000 UTC m=+0.103688208 container exec df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1769056855, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter)
Jan 31 09:53:33 compute-0 podman[228379]: 2026-01-31 09:53:33.418802619 +0000 UTC m=+0.137201151 container exec_died df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, 
Inc., vcs-type=git, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc.)
Jan 31 09:53:33 compute-0 systemd[1]: libpod-conmon-df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044.scope: Deactivated successfully.
Jan 31 09:53:33 compute-0 sudo[228376]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.604 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.604 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.622 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:33 compute-0 nova_compute[185194]: 2026-01-31 09:53:33.622 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:53:33 compute-0 sudo[228558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgymexrqulkubrmrtklqohcfdadhtjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853213.6586835-832-223931939333401/AnsiballZ_file.py'
Jan 31 09:53:33 compute-0 sudo[228558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:34 compute-0 python3.9[228560]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:34 compute-0 sudo[228558]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:34 compute-0 nova_compute[185194]: 2026-01-31 09:53:34.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:53:34 compute-0 sudo[228710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyynrioqzjncklubjhtrvnplxcgwqmfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853214.3670168-841-84402557826063/AnsiballZ_podman_container_info.py'
Jan 31 09:53:34 compute-0 sudo[228710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:34 compute-0 python3.9[228712]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 31 09:53:34 compute-0 sudo[228710]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:35 compute-0 sudo[228891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrabfepskiedywtlqvxxroztqzjaywm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853215.1680532-849-4381048619483/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:35 compute-0 sudo[228891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:35 compute-0 podman[228848]: 2026-01-31 09:53:35.534930263 +0000 UTC m=+0.094834746 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:53:35 compute-0 python3.9[228900]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:35 compute-0 systemd[1]: Started libpod-conmon-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope.
Jan 31 09:53:35 compute-0 podman[228901]: 2026-01-31 09:53:35.891184559 +0000 UTC m=+0.103591206 container exec 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 31 09:53:35 compute-0 podman[228901]: 2026-01-31 09:53:35.926884272 +0000 UTC m=+0.139290859 container exec_died 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 31 09:53:35 compute-0 systemd[1]: libpod-conmon-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope: Deactivated successfully.
Jan 31 09:53:35 compute-0 sudo[228891]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:36 compute-0 sudo[229081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbdhdttsqjbqhldgszjpuhnwlnuflmfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853216.2088826-857-252058653142979/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:36 compute-0 sudo[229081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:36 compute-0 python3.9[229083]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:36 compute-0 systemd[1]: Started libpod-conmon-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope.
Jan 31 09:53:36 compute-0 podman[229084]: 2026-01-31 09:53:36.841382298 +0000 UTC m=+0.099693594 container exec 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:53:36 compute-0 podman[229084]: 2026-01-31 09:53:36.873664859 +0000 UTC m=+0.131976135 container exec_died 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:53:36 compute-0 systemd[1]: libpod-conmon-81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882.scope: Deactivated successfully.
Jan 31 09:53:36 compute-0 sudo[229081]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:37 compute-0 sudo[229262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrwiwrfnqyvnmgpczvxssarvmyztsfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853217.1247044-865-132192381704807/AnsiballZ_file.py'
Jan 31 09:53:37 compute-0 sudo[229262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:37 compute-0 python3.9[229264]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:37 compute-0 sudo[229262]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:38 compute-0 sudo[229414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkuattohkbcsaniwlnrmiywjdlrryeui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853217.894135-874-150004337766359/AnsiballZ_podman_container_info.py'
Jan 31 09:53:38 compute-0 sudo[229414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:38 compute-0 python3.9[229416]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 31 09:53:38 compute-0 sudo[229414]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:38 compute-0 podman[229509]: 2026-01-31 09:53:38.997142457 +0000 UTC m=+0.115818899 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:53:39 compute-0 sudo[229596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtagkcnsljltufmjqgjccikwhmjpczjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853218.7246535-882-67153738055118/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:39 compute-0 sudo[229596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:39 compute-0 python3.9[229598]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:39 compute-0 systemd[1]: Started libpod-conmon-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.scope.
Jan 31 09:53:39 compute-0 podman[229599]: 2026-01-31 09:53:39.642016602 +0000 UTC m=+0.304595292 container exec 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543)
Jan 31 09:53:39 compute-0 podman[229599]: 2026-01-31 09:53:39.964712135 +0000 UTC m=+0.627290785 container exec_died 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=)
Jan 31 09:53:40 compute-0 systemd[1]: libpod-conmon-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.scope: Deactivated successfully.
Jan 31 09:53:40 compute-0 sudo[229596]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:40 compute-0 sudo[229796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtvlxhqukfdqobxewfnnzoerpickfacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853220.3673406-890-29265966701542/AnsiballZ_podman_container_exec.py'
Jan 31 09:53:40 compute-0 sudo[229796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:40 compute-0 podman[229752]: 2026-01-31 09:53:40.711611835 +0000 UTC m=+0.073769089 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public)
Jan 31 09:53:40 compute-0 python3.9[229801]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 31 09:53:40 compute-0 systemd[1]: Started libpod-conmon-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.scope.
Jan 31 09:53:40 compute-0 podman[229802]: 2026-01-31 09:53:40.997322441 +0000 UTC m=+0.092921191 container exec 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, name=ubi9, config_id=kepler, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 31 09:53:41 compute-0 podman[229802]: 2026-01-31 09:53:41.030273783 +0000 UTC m=+0.125872513 container exec_died 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, release-0.7.12=, config_id=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Jan 31 09:53:41 compute-0 systemd[1]: libpod-conmon-2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60.scope: Deactivated successfully.
Jan 31 09:53:41 compute-0 sudo[229796]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:41 compute-0 sudo[229983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crntxhcmkuarudobqwxrarrrhwlzaxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853221.2718968-898-39277945197653/AnsiballZ_file.py'
Jan 31 09:53:41 compute-0 sudo[229983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:41 compute-0 python3.9[229985]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:41 compute-0 sudo[229983]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:42 compute-0 sudo[230135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shhdknolkpmctjpmbfnxuuclyjxmfdzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853222.0557592-907-167795393013468/AnsiballZ_file.py'
Jan 31 09:53:42 compute-0 sudo[230135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:42 compute-0 python3.9[230137]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:42 compute-0 sudo[230135]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:43 compute-0 sudo[230287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdrzlhwagibpvmuzsblzxltwlqxjwjko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853222.7750974-915-245878406093683/AnsiballZ_stat.py'
Jan 31 09:53:43 compute-0 sudo[230287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:43 compute-0 python3.9[230289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:43 compute-0 sudo[230287]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:43 compute-0 sudo[230410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjsayqykdjveaczhholikbhggqaozwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853222.7750974-915-245878406093683/AnsiballZ_copy.py'
Jan 31 09:53:43 compute-0 sudo[230410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:43 compute-0 python3.9[230412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769853222.7750974-915-245878406093683/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:43 compute-0 sudo[230410]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:44 compute-0 sudo[230562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olpuapjltgsbwmvctuerwbkccwhskpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853224.2222736-931-271982314702946/AnsiballZ_file.py'
Jan 31 09:53:44 compute-0 sudo[230562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:44 compute-0 python3.9[230564]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:44 compute-0 sudo[230562]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:45 compute-0 sudo[230714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpuulwtyqsnuznnitomkdnrpepixhsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853225.0006392-939-103064524941375/AnsiballZ_stat.py'
Jan 31 09:53:45 compute-0 sudo[230714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:45 compute-0 python3.9[230716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:45 compute-0 sudo[230714]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:45 compute-0 sudo[230792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nelnhlhxhajgjawgvpusyhxsxhtowxyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853225.0006392-939-103064524941375/AnsiballZ_file.py'
Jan 31 09:53:45 compute-0 sudo[230792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:46 compute-0 python3.9[230794]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:46 compute-0 sudo[230792]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:46 compute-0 sudo[230944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtoxwlupstatbawzsuyflwehyujkrzkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853226.4640472-951-71083643092158/AnsiballZ_stat.py'
Jan 31 09:53:46 compute-0 sudo[230944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:46 compute-0 python3.9[230946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:47 compute-0 sudo[230944]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:47 compute-0 podman[230947]: 2026-01-31 09:53:47.084539009 +0000 UTC m=+0.063337830 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:53:47 compute-0 sudo[231045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnencaitfrqwrcvskrbtgpylklnrvds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853226.4640472-951-71083643092158/AnsiballZ_file.py'
Jan 31 09:53:47 compute-0 sudo[231045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:47 compute-0 python3.9[231047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0mzm9b7v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:47 compute-0 sudo[231045]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:48 compute-0 sudo[231197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavtucduaroufvwucskjjwcjdohtqyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853227.7191849-963-58715115078999/AnsiballZ_stat.py'
Jan 31 09:53:48 compute-0 sudo[231197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:48 compute-0 python3.9[231199]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:48 compute-0 sudo[231197]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:48 compute-0 sudo[231275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjrwibybqfqfxrpcjlzxzpdzbxvbdgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853227.7191849-963-58715115078999/AnsiballZ_file.py'
Jan 31 09:53:48 compute-0 sudo[231275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:48 compute-0 python3.9[231277]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:48 compute-0 sudo[231275]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:49 compute-0 sudo[231427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhytjillprkifzmxfkyuxiqfdptslync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853228.8900528-976-72689351113754/AnsiballZ_command.py'
Jan 31 09:53:49 compute-0 sudo[231427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:49 compute-0 python3.9[231429]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:53:49 compute-0 sudo[231427]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:50 compute-0 sudo[231581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtihvzvwzizjbnrtmygylvjgafneontu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853229.6948066-984-38349129042656/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 09:53:50 compute-0 sudo[231581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:50 compute-0 python3[231583]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 09:53:50 compute-0 sudo[231581]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:50 compute-0 sudo[231733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tblrvwsrelkeqfqbymmtxjpcpcikuadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853230.705812-992-71922546696222/AnsiballZ_stat.py'
Jan 31 09:53:51 compute-0 sudo[231733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:51 compute-0 python3.9[231735]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:51 compute-0 sudo[231733]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:51 compute-0 sudo[231828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hihxlrayhemqxxlhnqvwjoletulkalop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853230.705812-992-71922546696222/AnsiballZ_file.py'
Jan 31 09:53:51 compute-0 sudo[231828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:51 compute-0 podman[231785]: 2026-01-31 09:53:51.51755153 +0000 UTC m=+0.083965097 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, release=1769056855, distribution-scope=public, 
config_id=openstack_network_exporter, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:53:51 compute-0 python3.9[231834]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:51 compute-0 sudo[231828]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:52 compute-0 sudo[231984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiagafifrrmbhiggzqsbxaqfwgfkpfeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853231.897758-1004-135474675105565/AnsiballZ_stat.py'
Jan 31 09:53:52 compute-0 sudo[231984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:52 compute-0 python3.9[231986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:52 compute-0 sudo[231984]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:52 compute-0 sudo[232062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gswwgswvvkgkjmciluwcgvnhipjtbtba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853231.897758-1004-135474675105565/AnsiballZ_file.py'
Jan 31 09:53:52 compute-0 sudo[232062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:52 compute-0 python3.9[232064]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:53 compute-0 sudo[232062]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:53 compute-0 sudo[232231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evzokxohpcjhlfzkichkyzchvvyhezyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853233.2336957-1016-228004664208209/AnsiballZ_stat.py'
Jan 31 09:53:53 compute-0 sudo[232231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:53 compute-0 podman[232188]: 2026-01-31 09:53:53.661919415 +0000 UTC m=+0.103720369 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 09:53:53 compute-0 python3.9[232235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:53 compute-0 sudo[232231]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:54 compute-0 sudo[232311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyboniyadjksoxfxnptcbcxnfuklqobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853233.2336957-1016-228004664208209/AnsiballZ_file.py'
Jan 31 09:53:54 compute-0 sudo[232311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:54 compute-0 python3.9[232313]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:54 compute-0 sudo[232311]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:54 compute-0 sudo[232463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwfvbvtcvnyayvpfdorzlzkfbikfsap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853234.5535302-1028-243886625336389/AnsiballZ_stat.py'
Jan 31 09:53:54 compute-0 sudo[232463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:55 compute-0 python3.9[232465]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:55 compute-0 sudo[232463]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:55 compute-0 sudo[232541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobrzrvgdyofjkygqyyxceyogltyqgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853234.5535302-1028-243886625336389/AnsiballZ_file.py'
Jan 31 09:53:55 compute-0 sudo[232541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:55 compute-0 python3.9[232543]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:55 compute-0 sudo[232541]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:56 compute-0 sudo[232693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckfgukzltzbmfdxywrqsemnhczbfpavp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853235.7725334-1040-31713732523737/AnsiballZ_stat.py'
Jan 31 09:53:56 compute-0 sudo[232693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:56 compute-0 python3.9[232695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:53:56 compute-0 sudo[232693]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:56 compute-0 sudo[232818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgaqfuqvfaazlvjpcydqtmwowyjfqozj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853235.7725334-1040-31713732523737/AnsiballZ_copy.py'
Jan 31 09:53:56 compute-0 sudo[232818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:57 compute-0 python3.9[232820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769853235.7725334-1040-31713732523737/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:57 compute-0 sudo[232818]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:57 compute-0 sudo[232970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hacdwqwiklueqrumascgaibfnqvewfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853237.286854-1055-268228105965892/AnsiballZ_file.py'
Jan 31 09:53:57 compute-0 sudo[232970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:57 compute-0 python3.9[232972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:57 compute-0 sudo[232970]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:58 compute-0 sudo[233122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtcbwcbxjctcgqvddstveouptuezqsgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853238.062491-1063-228805308028707/AnsiballZ_command.py'
Jan 31 09:53:58 compute-0 sudo[233122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:58 compute-0 python3.9[233124]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:53:58 compute-0 sudo[233122]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:59 compute-0 sudo[233277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrkhwzqxvaqlbdngmgwtxgscvdmwkka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853238.831139-1071-70496587102120/AnsiballZ_blockinfile.py'
Jan 31 09:53:59 compute-0 sudo[233277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:53:59 compute-0 python3.9[233279]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:53:59 compute-0 sudo[233277]: pam_unix(sudo:session): session closed for user root
Jan 31 09:53:59 compute-0 podman[201068]: time="2026-01-31T09:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:53:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:53:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3871 "" "Go-http-client/1.1"
Jan 31 09:54:00 compute-0 sudo[233429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekznvdxioldjtcavgxjuqsttsbtlbnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853239.833529-1080-117227326531201/AnsiballZ_command.py'
Jan 31 09:54:00 compute-0 sudo[233429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:00 compute-0 python3.9[233431]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:54:00 compute-0 sudo[233429]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:01 compute-0 sudo[233582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgnvplautkpccbztisliwjxnxipfcwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853240.6815321-1088-103259565665512/AnsiballZ_stat.py'
Jan 31 09:54:01 compute-0 sudo[233582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:01 compute-0 python3.9[233584]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 09:54:01 compute-0 sudo[233582]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:01 compute-0 openstack_network_exporter[204162]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:54:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:54:01 compute-0 openstack_network_exporter[204162]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:54:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:54:01 compute-0 sudo[233758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yymhrgufdwwahskixrorzxilsxhrhmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853241.4849575-1096-98248737142345/AnsiballZ_command.py'
Jan 31 09:54:01 compute-0 sudo[233758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:01 compute-0 podman[233711]: 2026-01-31 09:54:01.843329427 +0000 UTC m=+0.084069189 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:54:01 compute-0 podman[233710]: 2026-01-31 09:54:01.881869562 +0000 UTC m=+0.123588412 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:54:02 compute-0 python3.9[233772]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:54:02 compute-0 sudo[233758]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:02 compute-0 sudo[233935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepiegjhjhaokwhxhokaxmgrlglhscgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853242.2784805-1104-46252233893584/AnsiballZ_file.py'
Jan 31 09:54:02 compute-0 sudo[233935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:02 compute-0 python3.9[233937]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:54:02 compute-0 sudo[233935]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:03 compute-0 sshd-session[212988]: Connection closed by 192.168.122.30 port 37054
Jan 31 09:54:03 compute-0 sshd-session[212985]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:54:03 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 31 09:54:03 compute-0 systemd[1]: session-26.scope: Consumed 1min 21.325s CPU time.
Jan 31 09:54:03 compute-0 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Jan 31 09:54:03 compute-0 systemd-logind[795]: Removed session 26.
Jan 31 09:54:05 compute-0 podman[233962]: 2026-01-31 09:54:05.955470678 +0000 UTC m=+0.085388722 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:54:09 compute-0 sshd-session[233985]: Accepted publickey for zuul from 192.168.122.30 port 35122 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 09:54:09 compute-0 systemd-logind[795]: New session 27 of user zuul.
Jan 31 09:54:09 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 31 09:54:09 compute-0 sshd-session[233985]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:54:09 compute-0 podman[233987]: 2026-01-31 09:54:09.318604642 +0000 UTC m=+0.060296473 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 31 09:54:10 compute-0 python3.9[234157]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:54:10 compute-0 podman[234183]: 2026-01-31 09:54:10.951267764 +0000 UTC m=+0.068318401 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=kepler, name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., 
architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:54:11 compute-0 sudo[234330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqotaopkwkvuunsegwzbkmsxjtlcmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853251.0430086-29-65379440850086/AnsiballZ_systemd.py'
Jan 31 09:54:11 compute-0 sudo[234330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:11 compute-0 python3.9[234332]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 31 09:54:12 compute-0 sudo[234330]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:12 compute-0 sudo[234483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otbnyevedmkodldnhcwodsemntgondsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853252.2226262-37-85648776190709/AnsiballZ_setup.py'
Jan 31 09:54:12 compute-0 sudo[234483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:12 compute-0 python3.9[234485]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 09:54:13 compute-0 sudo[234483]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:13 compute-0 sudo[234567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmmzhddrrwhffnvtyvutvinuvwcbpst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853252.2226262-37-85648776190709/AnsiballZ_dnf.py'
Jan 31 09:54:13 compute-0 sudo[234567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:13 compute-0 python3.9[234569]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 09:54:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:54:16.415 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:54:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:54:16.415 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:54:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:54:16.416 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:54:16 compute-0 sudo[234567]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:17 compute-0 sudo[234736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hybhgzsipckkdrkfrnwowgcffmfjbsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853256.7064235-49-103475608987750/AnsiballZ_stat.py'
Jan 31 09:54:17 compute-0 sudo[234736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:17 compute-0 podman[234699]: 2026-01-31 09:54:17.248179578 +0000 UTC m=+0.093177693 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:54:17 compute-0 python3.9[234749]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:54:17 compute-0 sudo[234736]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:18 compute-0 sudo[234870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-durbfdphbdokebmfpnqetuoktytdnljf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853256.7064235-49-103475608987750/AnsiballZ_copy.py'
Jan 31 09:54:18 compute-0 sudo[234870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:18 compute-0 python3.9[234872]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769853256.7064235-49-103475608987750/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:54:18 compute-0 sudo[234870]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:18 compute-0 sudo[235022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmgexnfjxdepttjxopzedqjjafikuvmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853258.4340577-64-172918261555484/AnsiballZ_file.py'
Jan 31 09:54:18 compute-0 sudo[235022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:19 compute-0 python3.9[235024]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:54:19 compute-0 sudo[235022]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:19 compute-0 sudo[235175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgltetgnbmjmpvpyigqjizfxrrsccoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853259.4133735-72-12429788752610/AnsiballZ_stat.py'
Jan 31 09:54:19 compute-0 sudo[235175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:19 compute-0 python3.9[235177]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 09:54:19 compute-0 sudo[235175]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:20 compute-0 sudo[235298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjxqawkgntfqxblltjljxlknispjtcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853259.4133735-72-12429788752610/AnsiballZ_copy.py'
Jan 31 09:54:20 compute-0 sudo[235298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:20 compute-0 python3.9[235300]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1769853259.4133735-72-12429788752610/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 09:54:20 compute-0 sudo[235298]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:21 compute-0 sudo[235450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqdduytvoajxxkqeldrqjxfjzoukpxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769853260.6814501-87-178304635992545/AnsiballZ_systemd.py'
Jan 31 09:54:21 compute-0 sudo[235450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:54:21 compute-0 python3.9[235452]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 09:54:21 compute-0 systemd[1]: Stopping System Logging Service...
Jan 31 09:54:21 compute-0 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 31 09:54:21 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 31 09:54:21 compute-0 systemd[1]: Stopped System Logging Service.
Jan 31 09:54:21 compute-0 systemd[1]: rsyslog.service: Consumed 3.553s CPU time, 9.3M memory peak, read 0B from disk, written 5.6M to disk.
Jan 31 09:54:21 compute-0 systemd[1]: Starting System Logging Service...
Jan 31 09:54:21 compute-0 rsyslogd[235457]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="235457" x-info="https://www.rsyslog.com"] start
Jan 31 09:54:21 compute-0 systemd[1]: Started System Logging Service.
Jan 31 09:54:21 compute-0 rsyslogd[235457]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:54:21 compute-0 rsyslogd[235457]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 31 09:54:21 compute-0 rsyslogd[235457]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 31 09:54:21 compute-0 rsyslogd[235457]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 31 09:54:21 compute-0 sudo[235450]: pam_unix(sudo:session): session closed for user root
Jan 31 09:54:21 compute-0 podman[235456]: 2026-01-31 09:54:21.995406831 +0000 UTC m=+0.114728314 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9/ubi-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Jan 31 09:54:22 compute-0 rsyslogd[235457]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 31 09:54:22 compute-0 sshd-session[233997]: Connection closed by 192.168.122.30 port 35122
Jan 31 09:54:22 compute-0 sshd-session[233985]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:54:22 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 31 09:54:22 compute-0 systemd[1]: session-27.scope: Consumed 9.415s CPU time.
Jan 31 09:54:22 compute-0 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Jan 31 09:54:22 compute-0 systemd-logind[795]: Removed session 27.
Jan 31 09:54:23 compute-0 podman[235503]: 2026-01-31 09:54:23.968115558 +0000 UTC m=+0.081701323 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.638 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.639 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.639 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:54:28 compute-0 nova_compute[185194]: 2026-01-31 09:54:28.660 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:29 compute-0 podman[201068]: time="2026-01-31T09:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:54:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:54:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3874 "" "Go-http-client/1.1"
Jan 31 09:54:30 compute-0 nova_compute[185194]: 2026-01-31 09:54:30.687 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.694 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.695 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.695 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.711 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:54:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:54:31 compute-0 openstack_network_exporter[204162]: ERROR   09:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:54:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:54:31 compute-0 openstack_network_exporter[204162]: ERROR   09:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:54:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:54:31 compute-0 nova_compute[185194]: 2026-01-31 09:54:31.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:31 compute-0 nova_compute[185194]: 2026-01-31 09:54:31.635 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:54:31 compute-0 nova_compute[185194]: 2026-01-31 09:54:31.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:54:31 compute-0 nova_compute[185194]: 2026-01-31 09:54:31.637 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:54:31 compute-0 nova_compute[185194]: 2026-01-31 09:54:31.637 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:54:32 compute-0 podman[235520]: 2026-01-31 09:54:32.005509599 +0000 UTC m=+0.123429454 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.066 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.067 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5721MB free_disk=72.45633697509766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.067 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.067 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:54:32 compute-0 podman[235540]: 2026-01-31 09:54:32.097957836 +0000 UTC m=+0.091069918 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.234 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.235 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.369 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.488 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.489 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.510 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.539 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.571 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.585 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.587 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:54:32 compute-0 nova_compute[185194]: 2026-01-31 09:54:32.587 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.588 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.589 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.589 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.622 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:54:33 compute-0 nova_compute[185194]: 2026-01-31 09:54:33.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:34 compute-0 nova_compute[185194]: 2026-01-31 09:54:34.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:35 compute-0 nova_compute[185194]: 2026-01-31 09:54:35.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:54:35 compute-0 nova_compute[185194]: 2026-01-31 09:54:35.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:54:36 compute-0 podman[235565]: 2026-01-31 09:54:36.969131499 +0000 UTC m=+0.095284879 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 09:54:39 compute-0 podman[235586]: 2026-01-31 09:54:39.977080454 +0000 UTC m=+0.099061982 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 09:54:41 compute-0 podman[235605]: 2026-01-31 09:54:41.955104497 +0000 UTC m=+0.076292295 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
Jan 31 09:54:47 compute-0 podman[235626]: 2026-01-31 09:54:47.973265285 +0000 UTC m=+0.094964183 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:54:52 compute-0 podman[235651]: 2026-01-31 09:54:52.933596174 +0000 UTC m=+0.061497463 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Jan 31 09:54:54 compute-0 podman[235671]: 2026-01-31 09:54:54.991739756 +0000 UTC m=+0.109670633 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 09:54:59 compute-0 podman[201068]: time="2026-01-31T09:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:54:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:54:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3870 "" "Go-http-client/1.1"
Jan 31 09:55:01 compute-0 openstack_network_exporter[204162]: ERROR   09:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:55:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:55:01 compute-0 openstack_network_exporter[204162]: ERROR   09:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:55:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:55:02 compute-0 sshd-session[235689]: Accepted publickey for zuul from 38.102.83.5 port 51020 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 09:55:02 compute-0 systemd-logind[795]: New session 28 of user zuul.
Jan 31 09:55:02 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 31 09:55:02 compute-0 sshd-session[235689]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:55:02 compute-0 podman[235693]: 2026-01-31 09:55:02.472687246 +0000 UTC m=+0.121433480 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Jan 31 09:55:02 compute-0 podman[235691]: 2026-01-31 09:55:02.485030945 +0000 UTC m=+0.134085186 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Jan 31 09:55:03 compute-0 python3[235911]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:55:05 compute-0 sudo[236132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llcjjsfakbpbrepcgehxnzxqkwqfmhwx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853304.6864247-37126-183436924583760/AnsiballZ_command.py'
Jan 31 09:55:05 compute-0 sudo[236132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:55:05 compute-0 python3[236134]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:55:05 compute-0 sudo[236132]: pam_unix(sudo:session): session closed for user root
Jan 31 09:55:05 compute-0 sudo[236285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suekkekrrzcnuepfjjfaumqmuxvhgolh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853305.6709878-37137-176945961764741/AnsiballZ_command.py'
Jan 31 09:55:05 compute-0 sudo[236285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:55:06 compute-0 python3[236287]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:55:07 compute-0 sudo[236285]: pam_unix(sudo:session): session closed for user root
Jan 31 09:55:07 compute-0 podman[236296]: 2026-01-31 09:55:07.936606421 +0000 UTC m=+0.058810944 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 09:55:08 compute-0 python3[236463]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 09:55:09 compute-0 sudo[236614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuqntzavuagzhvwjruegqbwfxiscxuh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853309.3798919-37183-91167247505686/AnsiballZ_setup.py'
Jan 31 09:55:09 compute-0 sudo[236614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:55:09 compute-0 python3[236616]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 09:55:10 compute-0 sudo[236614]: pam_unix(sudo:session): session closed for user root
Jan 31 09:55:10 compute-0 podman[236691]: 2026-01-31 09:55:10.97412637 +0000 UTC m=+0.093798818 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 31 09:55:11 compute-0 sudo[236859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcqitcpsmqozjnurlhuxsceoqmlynfp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853311.4945045-37214-150496308119289/AnsiballZ_command.py'
Jan 31 09:55:11 compute-0 sudo[236859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:55:11 compute-0 python3[236861]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:55:12 compute-0 sudo[236859]: pam_unix(sudo:session): session closed for user root
Jan 31 09:55:12 compute-0 sudo[237038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fghszvfodojudoyblyirzbplvqoldjqh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769853312.452628-37231-19563701549408/AnsiballZ_command.py'
Jan 31 09:55:12 compute-0 sudo[237038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:55:12 compute-0 podman[237000]: 2026-01-31 09:55:12.81398422 +0000 UTC m=+0.096689231 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, version=9.4, config_id=kepler, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 31 09:55:12 compute-0 python3[237047]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 09:55:13 compute-0 sudo[237038]: pam_unix(sudo:session): session closed for user root
Jan 31 09:55:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:55:16.417 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:55:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:55:16.418 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:55:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:55:16.418 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:55:18 compute-0 podman[237086]: 2026-01-31 09:55:18.931588696 +0000 UTC m=+0.059069809 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:55:24 compute-0 podman[237111]: 2026-01-31 09:55:24.002403251 +0000 UTC m=+0.120593947 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, release=1769056855, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container)
Jan 31 09:55:26 compute-0 podman[237131]: 2026-01-31 09:55:26.012498253 +0000 UTC m=+0.135014931 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:55:29 compute-0 podman[201068]: time="2026-01-31T09:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:55:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:55:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3873 "" "Go-http-client/1.1"
Jan 31 09:55:31 compute-0 openstack_network_exporter[204162]: ERROR   09:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:55:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:55:31 compute-0 openstack_network_exporter[204162]: ERROR   09:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:55:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:55:32 compute-0 nova_compute[185194]: 2026-01-31 09:55:32.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:32 compute-0 nova_compute[185194]: 2026-01-31 09:55:32.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:32 compute-0 nova_compute[185194]: 2026-01-31 09:55:32.617 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:32 compute-0 podman[237150]: 2026-01-31 09:55:32.959852315 +0000 UTC m=+0.080522312 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 09:55:32 compute-0 podman[237149]: 2026-01-31 09:55:32.985477715 +0000 UTC m=+0.106037329 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.658 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.658 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.659 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:55:33 compute-0 nova_compute[185194]: 2026-01-31 09:55:33.659 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.030 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.032 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5695MB free_disk=72.45615005493164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.033 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.033 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.119 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.119 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.156 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.487 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.488 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:55:34 compute-0 nova_compute[185194]: 2026-01-31 09:55:34.488 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:55:35 compute-0 nova_compute[185194]: 2026-01-31 09:55:35.489 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:35 compute-0 nova_compute[185194]: 2026-01-31 09:55:35.490 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:35 compute-0 nova_compute[185194]: 2026-01-31 09:55:35.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:35 compute-0 nova_compute[185194]: 2026-01-31 09:55:35.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:55:35 compute-0 nova_compute[185194]: 2026-01-31 09:55:35.608 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:55:36 compute-0 nova_compute[185194]: 2026-01-31 09:55:36.106 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:55:36 compute-0 nova_compute[185194]: 2026-01-31 09:55:36.108 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:36 compute-0 nova_compute[185194]: 2026-01-31 09:55:36.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:55:36 compute-0 nova_compute[185194]: 2026-01-31 09:55:36.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:55:38 compute-0 podman[237195]: 2026-01-31 09:55:38.973519458 +0000 UTC m=+0.087367628 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:55:42 compute-0 podman[237219]: 2026-01-31 09:55:42.005992632 +0000 UTC m=+0.120979517 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:55:42 compute-0 podman[237238]: 2026-01-31 09:55:42.974663944 +0000 UTC m=+0.096331194 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543)
Jan 31 09:55:49 compute-0 podman[237259]: 2026-01-31 09:55:49.965797351 +0000 UTC m=+0.088119049 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:55:55 compute-0 podman[237281]: 2026-01-31 09:55:55.011910754 +0000 UTC m=+0.123703062 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z)
Jan 31 09:55:56 compute-0 podman[237303]: 2026-01-31 09:55:56.970809936 +0000 UTC m=+0.099041768 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 09:55:59 compute-0 podman[201068]: time="2026-01-31T09:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:55:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:55:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3881 "" "Go-http-client/1.1"
Jan 31 09:56:01 compute-0 openstack_network_exporter[204162]: ERROR   09:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:56:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:56:01 compute-0 openstack_network_exporter[204162]: ERROR   09:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:56:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:56:03 compute-0 podman[237323]: 2026-01-31 09:56:03.966100276 +0000 UTC m=+0.088254862 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 31 09:56:04 compute-0 podman[237322]: 2026-01-31 09:56:04.001506504 +0000 UTC m=+0.126042165 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 09:56:09 compute-0 podman[237365]: 2026-01-31 09:56:09.971775613 +0000 UTC m=+0.089662691 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 09:56:12 compute-0 sshd-session[235711]: Received disconnect from 38.102.83.5 port 51020:11: disconnected by user
Jan 31 09:56:12 compute-0 sshd-session[235711]: Disconnected from user zuul 38.102.83.5 port 51020
Jan 31 09:56:12 compute-0 sshd-session[235689]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:56:12 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 31 09:56:12 compute-0 systemd[1]: session-28.scope: Consumed 8.436s CPU time.
Jan 31 09:56:12 compute-0 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Jan 31 09:56:12 compute-0 systemd-logind[795]: Removed session 28.
Jan 31 09:56:12 compute-0 podman[237388]: 2026-01-31 09:56:12.842184117 +0000 UTC m=+0.092539860 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:56:13 compute-0 podman[237408]: 2026-01-31 09:56:13.9883222 +0000 UTC m=+0.109749080 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Jan 31 09:56:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:56:16.419 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:56:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:56:16.420 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:56:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:56:16.420 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:56:20 compute-0 podman[237428]: 2026-01-31 09:56:20.94408357 +0000 UTC m=+0.072130632 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:56:25 compute-0 podman[237451]: 2026-01-31 09:56:25.93743523 +0000 UTC m=+0.064522965 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, 
vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, architecture=x86_64)
Jan 31 09:56:27 compute-0 podman[237473]: 2026-01-31 09:56:27.934136904 +0000 UTC m=+0.065116330 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:56:29 compute-0 podman[201068]: time="2026-01-31T09:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:56:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:56:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3881 "" "Go-http-client/1.1"
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.695 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.696 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.700 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948d26630>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'disk.device.usage': [], 'network.outgoing.bytes.rate': [], 'disk.device.write.latency': [], 'network.incoming.packets.error': [], 'disk.device.allocation': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.715 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:56:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:56:31 compute-0 openstack_network_exporter[204162]: ERROR   09:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:56:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:56:31 compute-0 openstack_network_exporter[204162]: ERROR   09:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:56:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:56:32 compute-0 nova_compute[185194]: 2026-01-31 09:56:32.601 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:33 compute-0 nova_compute[185194]: 2026-01-31 09:56:33.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:33 compute-0 nova_compute[185194]: 2026-01-31 09:56:33.637 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:56:33 compute-0 nova_compute[185194]: 2026-01-31 09:56:33.638 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:56:33 compute-0 nova_compute[185194]: 2026-01-31 09:56:33.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:56:33 compute-0 nova_compute[185194]: 2026-01-31 09:56:33.639 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.007 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.008 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5686MB free_disk=72.45616912841797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.008 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.008 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.074 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.074 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.097 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.111 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.112 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:56:34 compute-0 nova_compute[185194]: 2026-01-31 09:56:34.112 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:56:35 compute-0 podman[237493]: 2026-01-31 09:56:35.001251025 +0000 UTC m=+0.128196939 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 09:56:35 compute-0 podman[237494]: 2026-01-31 09:56:35.014083304 +0000 UTC m=+0.131182433 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:56:35 compute-0 nova_compute[185194]: 2026-01-31 09:56:35.111 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:35 compute-0 nova_compute[185194]: 2026-01-31 09:56:35.111 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:35 compute-0 nova_compute[185194]: 2026-01-31 09:56:35.112 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:35 compute-0 nova_compute[185194]: 2026-01-31 09:56:35.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:35 compute-0 nova_compute[185194]: 2026-01-31 09:56:35.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.665 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.665 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:56:37 compute-0 nova_compute[185194]: 2026-01-31 09:56:37.666 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:56:40 compute-0 podman[237536]: 2026-01-31 09:56:40.95617115 +0000 UTC m=+0.081381814 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 09:56:43 compute-0 podman[237560]: 2026-01-31 09:56:43.957056505 +0000 UTC m=+0.081899087 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:56:44 compute-0 podman[237580]: 2026-01-31 09:56:44.760634216 +0000 UTC m=+0.103920965 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red 
Hat, Inc., release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, release-0.7.12=)
Jan 31 09:56:51 compute-0 podman[237601]: 2026-01-31 09:56:51.957127563 +0000 UTC m=+0.083706523 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:56:56 compute-0 podman[237624]: 2026-01-31 09:56:56.989693424 +0000 UTC m=+0.115742209 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
container_name=openstack_network_exporter, vcs-type=git, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, version=9.7, io.openshift.expose-services=)
Jan 31 09:56:58 compute-0 podman[237645]: 2026-01-31 09:56:58.968652849 +0000 UTC m=+0.091510467 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:56:59 compute-0 podman[201068]: time="2026-01-31T09:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:56:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:56:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3876 "" "Go-http-client/1.1"
Jan 31 09:57:01 compute-0 openstack_network_exporter[204162]: ERROR   09:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:57:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:57:01 compute-0 openstack_network_exporter[204162]: ERROR   09:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:57:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:57:05 compute-0 podman[237665]: 2026-01-31 09:57:05.976065084 +0000 UTC m=+0.094942991 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:57:05 compute-0 podman[237664]: 2026-01-31 09:57:05.987016257 +0000 UTC m=+0.106413347 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:57:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:10.956 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:57:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:10.958 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:57:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:10.959 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:57:11 compute-0 podman[237709]: 2026-01-31 09:57:11.980933291 +0000 UTC m=+0.106856788 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 09:57:14 compute-0 podman[237733]: 2026-01-31 09:57:14.765623902 +0000 UTC m=+0.092966843 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:57:14 compute-0 podman[237753]: 2026-01-31 09:57:14.858661315 +0000 UTC m=+0.064045554 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=)
Jan 31 09:57:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:16.420 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:57:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:16.420 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:57:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:57:16.420 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:57:23 compute-0 podman[237775]: 2026-01-31 09:57:23.000896386 +0000 UTC m=+0.123996264 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:57:27 compute-0 podman[237799]: 2026-01-31 09:57:27.943137181 +0000 UTC m=+0.067061939 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, distribution-scope=public, version=9.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 09:57:29 compute-0 podman[201068]: time="2026-01-31T09:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:57:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:57:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3877 "" "Go-http-client/1.1"
Jan 31 09:57:29 compute-0 podman[237820]: 2026-01-31 09:57:29.984069247 +0000 UTC m=+0.109336710 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 09:57:31 compute-0 openstack_network_exporter[204162]: ERROR   09:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:57:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:57:31 compute-0 openstack_network_exporter[204162]: ERROR   09:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:57:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:57:33 compute-0 nova_compute[185194]: 2026-01-31 09:57:33.660 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:34 compute-0 nova_compute[185194]: 2026-01-31 09:57:34.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:34 compute-0 nova_compute[185194]: 2026-01-31 09:57:34.674 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:57:34 compute-0 nova_compute[185194]: 2026-01-31 09:57:34.675 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:57:34 compute-0 nova_compute[185194]: 2026-01-31 09:57:34.675 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:57:34 compute-0 nova_compute[185194]: 2026-01-31 09:57:34.676 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.071 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.072 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5693MB free_disk=72.45654296875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.072 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.073 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.401 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.401 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.424 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.444 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.445 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:57:35 compute-0 nova_compute[185194]: 2026-01-31 09:57:35.445 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:57:36 compute-0 nova_compute[185194]: 2026-01-31 09:57:36.445 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:36 compute-0 nova_compute[185194]: 2026-01-31 09:57:36.446 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:36 compute-0 nova_compute[185194]: 2026-01-31 09:57:36.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:36 compute-0 nova_compute[185194]: 2026-01-31 09:57:36.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:36 compute-0 podman[237840]: 2026-01-31 09:57:36.972918741 +0000 UTC m=+0.089996489 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:57:36 compute-0 podman[237839]: 2026-01-31 09:57:36.980459614 +0000 UTC m=+0.105830834 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.620 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.621 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.637 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:57:37 compute-0 nova_compute[185194]: 2026-01-31 09:57:37.638 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:39 compute-0 nova_compute[185194]: 2026-01-31 09:57:39.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:57:39 compute-0 nova_compute[185194]: 2026-01-31 09:57:39.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:57:42 compute-0 podman[237880]: 2026-01-31 09:57:42.955480317 +0000 UTC m=+0.077571596 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:57:44 compute-0 podman[237903]: 2026-01-31 09:57:44.946378476 +0000 UTC m=+0.065079783 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 31 09:57:44 compute-0 podman[237904]: 2026-01-31 09:57:44.989838662 +0000 UTC m=+0.102838911 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, version=9.4)
Jan 31 09:57:53 compute-0 podman[237944]: 2026-01-31 09:57:53.967668958 +0000 UTC m=+0.096653290 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:57:58 compute-0 podman[237968]: 2026-01-31 09:57:58.995014527 +0000 UTC m=+0.117490938 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1769056855, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 09:57:59 compute-0 podman[201068]: time="2026-01-31T09:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:57:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:57:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Jan 31 09:58:00 compute-0 podman[237990]: 2026-01-31 09:58:00.959467984 +0000 UTC m=+0.078895789 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 09:58:01 compute-0 openstack_network_exporter[204162]: ERROR   09:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:58:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:58:01 compute-0 openstack_network_exporter[204162]: ERROR   09:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:58:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:58:07 compute-0 podman[238010]: 2026-01-31 09:58:07.999803419 +0000 UTC m=+0.112336682 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 09:58:08 compute-0 podman[238009]: 2026-01-31 09:58:08.003669592 +0000 UTC m=+0.122068757 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 09:58:13 compute-0 podman[238053]: 2026-01-31 09:58:13.946342608 +0000 UTC m=+0.072037652 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 09:58:15 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:15.298 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:58:15 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:15.299 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:58:15 compute-0 podman[238077]: 2026-01-31 09:58:15.957006007 +0000 UTC m=+0.085474218 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, 
middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, config_id=kepler, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=)
Jan 31 09:58:15 compute-0 podman[238078]: 2026-01-31 09:58:15.990836369 +0000 UTC m=+0.107674358 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 31 09:58:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:16.300 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:16.421 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:16.422 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:16.422 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:24 compute-0 podman[238118]: 2026-01-31 09:58:24.981767793 +0000 UTC m=+0.102876611 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:58:29 compute-0 podman[201068]: time="2026-01-31T09:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:58:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 09:58:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3873 "" "Go-http-client/1.1"
Jan 31 09:58:30 compute-0 podman[238142]: 2026-01-31 09:58:30.006610092 +0000 UTC m=+0.123426631 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.696 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.696 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.696 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.701 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.702 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.703 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.704 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.705 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.706 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.707 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.712 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.713 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 09:58:30.714 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.007 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.007 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.027 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.144 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.145 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.156 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.157 185198 INFO nova.compute.claims [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.308 185198 DEBUG nova.compute.provider_tree [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.332 185198 DEBUG nova.scheduler.client.report [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.361 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.362 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.415 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.417 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:58:31 compute-0 openstack_network_exporter[204162]: ERROR   09:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:58:31 compute-0 openstack_network_exporter[204162]: ERROR   09:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.456 185198 INFO nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.503 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.599 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.600 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.600 185198 INFO nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Creating image(s)
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.601 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.601 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.602 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.602 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:31 compute-0 nova_compute[185194]: 2026-01-31 09:58:31.603 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:31 compute-0 podman[238164]: 2026-01-31 09:58:31.968902545 +0000 UTC m=+0.087456196 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 09:58:32 compute-0 nova_compute[185194]: 2026-01-31 09:58:32.070 185198 WARNING oslo_policy.policy [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 09:58:32 compute-0 nova_compute[185194]: 2026-01-31 09:58:32.071 185198 WARNING oslo_policy.policy [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 09:58:32 compute-0 nova_compute[185194]: 2026-01-31 09:58:32.822 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Successfully created port: fea1caad-7786-4490-a707-f79cc6ff5fef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:58:32 compute-0 nova_compute[185194]: 2026-01-31 09:58:32.937 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.013 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.part --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.014 185198 DEBUG nova.virt.images [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] 8b57d666-88c0-4e62-a76a-0d45801ca1a6 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.017 185198 DEBUG nova.privsep.utils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.017 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.part /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.248 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.part /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.converted" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.252 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.318 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.319 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.331 185198 INFO oslo.privsep.daemon [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpglaj2s2c/privsep.sock']
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.915 185198 INFO oslo.privsep.daemon [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Spawned new privsep daemon via rootwrap
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.807 238202 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.811 238202 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.812 238202 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 09:58:33 compute-0 nova_compute[185194]: 2026-01-31 09:58:33.813 238202 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238202
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.005 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.058 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.060 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.061 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.087 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.135 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.137 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.177 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.178 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.180 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.233 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.234 185198 DEBUG nova.virt.disk.api [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking if we can resize image /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.235 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.313 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.314 185198 DEBUG nova.virt.disk.api [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Cannot resize image /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.315 185198 DEBUG nova.objects.instance [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'migration_context' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.331 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.332 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.333 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.333 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.334 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.336 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.356 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.358 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.396 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.397 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.415 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.464 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.465 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.466 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.494 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.550 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.552 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.588 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.589 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.590 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.642 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.643 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.644 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Ensure instance console log exists: /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.644 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.645 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:34 compute-0 nova_compute[185194]: 2026-01-31 09:58:34.645 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.320 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Successfully updated port: fea1caad-7786-4490-a707-f79cc6ff5fef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.347 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.347 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.347 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.637 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.638 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.640 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.822 185198 DEBUG nova.compute.manager [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-changed-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.822 185198 DEBUG nova.compute.manager [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Refreshing instance network info cache due to event network-changed-fea1caad-7786-4490-a707-f79cc6ff5fef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:58:35 compute-0 nova_compute[185194]: 2026-01-31 09:58:35.823 185198 DEBUG oslo_concurrency.lockutils [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.003 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.004 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5639MB free_disk=72.42586898803711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.005 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.005 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.061 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.081 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.081 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.082 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.139 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.172 185198 ERROR nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [req-0b31d108-79a0-4bc8-a6ca-25336404d0cf] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 1f8a458f-baaf-434f-841c-59d735622205.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-0b31d108-79a0-4bc8-a6ca-25336404d0cf"}]}
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.197 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.214 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.214 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.229 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.263 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.309 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.362 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updated inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.362 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating resource provider 1f8a458f-baaf-434f-841c-59d735622205 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.362 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.386 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:58:36 compute-0 nova_compute[185194]: 2026-01-31 09:58:36.386 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.289 185198 DEBUG nova.network.neutron [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.315 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.316 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Instance network_info: |[{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.317 185198 DEBUG oslo_concurrency.lockutils [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.318 185198 DEBUG nova.network.neutron [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Refreshing network info cache for port fea1caad-7786-4490-a707-f79cc6ff5fef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.325 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Start _get_guest_xml network_info=[{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.336 185198 WARNING nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.349 185198 DEBUG nova.virt.libvirt.host [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.350 185198 DEBUG nova.virt.libvirt.host [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.358 185198 DEBUG nova.virt.libvirt.host [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.359 185198 DEBUG nova.virt.libvirt.host [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.361 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.361 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T09:57:04Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='5ace5526-788a-41cf-9e40-e75da8858688',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.362 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.362 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.363 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.363 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.363 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.364 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.364 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.364 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.365 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.365 185198 DEBUG nova.virt.hardware [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.371 185198 DEBUG nova.privsep.utils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.373 185198 DEBUG nova.virt.libvirt.vif [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-ec42yxyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2026-01-31T09:58:31Z,user_data=None,user_id='d3342a7282114996b6010246d4ade24e',uuid=a6212880-427f-4876-8598-06909416bde1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.373 185198 DEBUG nova.network.os_vif_util [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.375 185198 DEBUG nova.network.os_vif_util [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.377 185198 DEBUG nova.objects.instance [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.387 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.398 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <uuid>a6212880-427f-4876-8598-06909416bde1</uuid>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <name>instance-00000001</name>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <memory>524288</memory>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <metadata>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:name>test_0</nova:name>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 09:58:37</nova:creationTime>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:flavor name="m1.small">
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:memory>512</nova:memory>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:user uuid="d3342a7282114996b6010246d4ade24e">admin</nova:user>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:project uuid="155389cbed6644acacdbeeb6155adb54">admin</nova:project>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="8b57d666-88c0-4e62-a76a-0d45801ca1a6"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         <nova:port uuid="fea1caad-7786-4490-a707-f79cc6ff5fef">
Jan 31 09:58:37 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="192.168.0.202" ipVersion="4"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </metadata>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <system>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="serial">a6212880-427f-4876-8598-06909416bde1</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="uuid">a6212880-427f-4876-8598-06909416bde1</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </system>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <os>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </os>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <features>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <apic/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </features>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </clock>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <target dev="vdb" bus="virtio"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.config"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:e0:7b:f6"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <target dev="tapfea1caad-77"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/console.log" append="off"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </serial>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <video>
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </video>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 09:58:37 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 09:58:37 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 09:58:37 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:58:37 compute-0 nova_compute[185194]: </domain>
Jan 31 09:58:37 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.399 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Preparing to wait for external event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.400 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.400 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.400 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.401 185198 DEBUG nova.virt.libvirt.vif [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-ec42yxyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,tru
sted_certs=None,updated_at=2026-01-31T09:58:31Z,user_data=None,user_id='d3342a7282114996b6010246d4ade24e',uuid=a6212880-427f-4876-8598-06909416bde1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.402 185198 DEBUG nova.network.os_vif_util [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.402 185198 DEBUG nova.network.os_vif_util [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.403 185198 DEBUG os_vif [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.434 185198 DEBUG ovsdbapp.backend.ovs_idl [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.434 185198 DEBUG ovsdbapp.backend.ovs_idl [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.435 185198 DEBUG ovsdbapp.backend.ovs_idl [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.436 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.436 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.437 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.438 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.454 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.454 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.455 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.456 185198 INFO oslo.privsep.daemon [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpf2xsfh67/privsep.sock']
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.625 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.626 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.627 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.627 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:37 compute-0 nova_compute[185194]: 2026-01-31 09:58:37.628 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.127 185198 INFO oslo.privsep.daemon [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Spawned new privsep daemon via rootwrap
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.029 238239 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.034 238239 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.036 238239 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.036 238239 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238239
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.434 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.435 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfea1caad-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.437 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfea1caad-77, col_values=(('external_ids', {'iface-id': 'fea1caad-7786-4490-a707-f79cc6ff5fef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:7b:f6', 'vm-uuid': 'a6212880-427f-4876-8598-06909416bde1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.439 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:38 compute-0 NetworkManager[56281]: <info>  [1769853518.4431] manager: (tapfea1caad-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.443 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.447 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.448 185198 INFO os_vif [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77')
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.517 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.517 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.518 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.518 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No VIF found with MAC fa:16:3e:e0:7b:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:58:38 compute-0 nova_compute[185194]: 2026-01-31 09:58:38.519 185198 INFO nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Using config drive
Jan 31 09:58:38 compute-0 podman[238246]: 2026-01-31 09:58:38.977180249 +0000 UTC m=+0.095993883 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 31 09:58:38 compute-0 podman[238245]: 2026-01-31 09:58:38.99976181 +0000 UTC m=+0.117938306 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.331 185198 DEBUG nova.network.neutron [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Updated VIF entry in instance network info cache for port fea1caad-7786-4490-a707-f79cc6ff5fef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.331 185198 DEBUG nova.network.neutron [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.347 185198 INFO nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Creating config drive at /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.config
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.352 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpye6ksx2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.368 185198 DEBUG oslo_concurrency.lockutils [req-4757bba5-9601-4244-ae96-2211829c192d req-f90fa37e-f15e-48b1-92a2-2a5597a23687 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.473 185198 DEBUG oslo_concurrency.processutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpye6ksx2d" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:58:39 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 31 09:58:39 compute-0 kernel: tapfea1caad-77: entered promiscuous mode
Jan 31 09:58:39 compute-0 NetworkManager[56281]: <info>  [1769853519.5729] manager: (tapfea1caad-77): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 09:58:39 compute-0 ovn_controller[97627]: 2026-01-31T09:58:39Z|00027|binding|INFO|Claiming lport fea1caad-7786-4490-a707-f79cc6ff5fef for this chassis.
Jan 31 09:58:39 compute-0 ovn_controller[97627]: 2026-01-31T09:58:39Z|00028|binding|INFO|fea1caad-7786-4490-a707-f79cc6ff5fef: Claiming fa:16:3e:e0:7b:f6 192.168.0.202
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.579 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.585 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:58:39 compute-0 systemd-udevd[238310]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:58:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:39.613 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:7b:f6 192.168.0.202'], port_security=['fa:16:3e:e0:7b:f6 192.168.0.202'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.202/24', 'neutron:device_id': 'a6212880-427f-4876-8598-06909416bde1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=fea1caad-7786-4490-a707-f79cc6ff5fef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:58:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:39.614 106883 INFO neutron.agent.ovn.metadata.agent [-] Port fea1caad-7786-4490-a707-f79cc6ff5fef in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b bound to our chassis
Jan 31 09:58:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:39.616 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 09:58:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:39.617 106883 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpq9l263zz/privsep.sock']
Jan 31 09:58:39 compute-0 NetworkManager[56281]: <info>  [1769853519.6232] device (tapfea1caad-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:58:39 compute-0 NetworkManager[56281]: <info>  [1769853519.6247] device (tapfea1caad-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.647 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:39 compute-0 ovn_controller[97627]: 2026-01-31T09:58:39Z|00029|binding|INFO|Setting lport fea1caad-7786-4490-a707-f79cc6ff5fef ovn-installed in OVS
Jan 31 09:58:39 compute-0 ovn_controller[97627]: 2026-01-31T09:58:39Z|00030|binding|INFO|Setting lport fea1caad-7786-4490-a707-f79cc6ff5fef up in Southbound
Jan 31 09:58:39 compute-0 nova_compute[185194]: 2026-01-31 09:58:39.664 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:39 compute-0 systemd-machined[156556]: New machine qemu-1-instance-00000001.
Jan 31 09:58:39 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.130 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853520.1289966, a6212880-427f-4876-8598-06909416bde1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.131 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] VM Started (Lifecycle Event)
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.170 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.176 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853520.1291726, a6212880-427f-4876-8598-06909416bde1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.176 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] VM Paused (Lifecycle Event)
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.194 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.202 185198 DEBUG nova.compute.manager [req-ba6a1dc8-150c-401a-ba5e-19ead2200060 req-ea692d06-07e4-4592-9686-f28fd7693113 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.203 185198 DEBUG oslo_concurrency.lockutils [req-ba6a1dc8-150c-401a-ba5e-19ead2200060 req-ea692d06-07e4-4592-9686-f28fd7693113 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.203 185198 DEBUG oslo_concurrency.lockutils [req-ba6a1dc8-150c-401a-ba5e-19ead2200060 req-ea692d06-07e4-4592-9686-f28fd7693113 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.203 185198 DEBUG oslo_concurrency.lockutils [req-ba6a1dc8-150c-401a-ba5e-19ead2200060 req-ea692d06-07e4-4592-9686-f28fd7693113 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.204 185198 DEBUG nova.compute.manager [req-ba6a1dc8-150c-401a-ba5e-19ead2200060 req-ea692d06-07e4-4592-9686-f28fd7693113 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Processing event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.205 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.211 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.213 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853520.2105048, a6212880-427f-4876-8598-06909416bde1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.213 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] VM Resumed (Lifecycle Event)
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.219 185198 INFO nova.virt.libvirt.driver [-] [instance: a6212880-427f-4876-8598-06909416bde1] Instance spawned successfully.
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.220 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.257 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.264 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.278 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.279 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.280 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.281 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.282 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.283 185198 DEBUG nova.virt.libvirt.driver [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.289 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.333 106883 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.334 106883 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpq9l263zz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.171 238337 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.179 238337 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.184 238337 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.184 238337 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238337
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.338 185198 INFO nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Took 8.74 seconds to spawn the instance on the hypervisor.
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.339 185198 DEBUG nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.339 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3408541c-70ed-4058-9e38-283dc0683e5d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.411 185198 INFO nova.compute.manager [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Took 9.31 seconds to build instance.
Jan 31 09:58:40 compute-0 nova_compute[185194]: 2026-01-31 09:58:40.433 185198 DEBUG oslo_concurrency.lockutils [None req-a3206f71-fd95-40f5-be7a-dc320bff786f d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.826 238337 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.826 238337 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:40 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:40.826 238337 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 09:58:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.348 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f77f1df7-da39-491c-924d-11851c81fffe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.351 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap95411ff1-61 in ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.352 238337 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap95411ff1-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.353 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[57a10c05-ac3b-4e6d-9a59-60cc09d114d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.356 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc40802-9826-46f6-b5ce-74cfb074e83a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.374 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[00376690-ffac-461e-bd04-d7bed3323d33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.389 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[bfdf3e55-09c4-4176-9fa4-c32b41900035]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.391 106883 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpl8j44gt3/privsep.sock']
Jan 31 09:58:41 compute-0 nova_compute[185194]: 2026-01-31 09:58:41.588 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.031 106883 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.032 106883 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl8j44gt3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.914 238370 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.919 238370 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.922 238370 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:41.922 238370 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238370
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.036 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5ff2ac-5081-428c-a9b9-c088fa8e54e9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.332 185198 DEBUG nova.compute.manager [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.333 185198 DEBUG oslo_concurrency.lockutils [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.333 185198 DEBUG oslo_concurrency.lockutils [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.334 185198 DEBUG oslo_concurrency.lockutils [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.334 185198 DEBUG nova.compute.manager [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] No waiting events found dispatching network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:58:42 compute-0 nova_compute[185194]: 2026-01-31 09:58:42.335 185198 WARNING nova.compute.manager [req-7ef484f8-c621-45d3-86a0-b47163dbc9f0 req-249b3931-a85b-4929-b738-ab2b99fdc6ec cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received unexpected event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef for instance with vm_state active and task_state None.
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.518 238370 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.518 238370 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:58:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:42.518 238370 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.031 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[a88819c3-ebc7-46df-b4ef-aef5455d6ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.061 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[42a7a6ba-af42-4c36-b108-8613c7b05d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 NetworkManager[56281]: <info>  [1769853523.0625] manager: (tap95411ff1-60): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 09:58:43 compute-0 systemd-udevd[238382]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.095 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[a49e6e15-7ec0-4d4b-a928-945726889e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.099 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7797af-9b93-4727-8907-cc769edebc25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 NetworkManager[56281]: <info>  [1769853523.1291] device (tap95411ff1-60): carrier: link connected
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.133 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[cac80e6f-0333-4643-b30b-4cd414cbcc35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.149 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[5458202e-aa64-492c-bb86-9ae0da50f6c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 32852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238400, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.161 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[5b30c27f-dfa1-4a02-a4b4-7d3fccd4fc5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:290b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374461, 'tstamp': 374461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238401, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.172 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a058be-0188-47f9-a718-b1527ecbf523]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 32852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 238402, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.191 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[998baff8-df90-4b49-b215-8babb2a95cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.238 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6c9893-1387-406d-bc22-d16e404c3288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.241 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.241 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.242 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:43 compute-0 kernel: tap95411ff1-60: entered promiscuous mode
Jan 31 09:58:43 compute-0 NetworkManager[56281]: <info>  [1769853523.2458] manager: (tap95411ff1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 09:58:43 compute-0 nova_compute[185194]: 2026-01-31 09:58:43.248 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.249 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:58:43 compute-0 ovn_controller[97627]: 2026-01-31T09:58:43Z|00031|binding|INFO|Releasing lport aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49 from this chassis (sb_readonly=0)
Jan 31 09:58:43 compute-0 nova_compute[185194]: 2026-01-31 09:58:43.252 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.253 106883 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/95411ff1-6cab-4c5b-9ab6-3779c480de3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/95411ff1-6cab-4c5b-9ab6-3779c480de3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.258 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d054ae-a875-436d-8b51-2fe1295025fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.259 106883 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: global
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     log         /dev/log local0 debug
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     log-tag     haproxy-metadata-proxy-95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     user        root
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     group       root
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     maxconn     1024
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     pidfile     /var/lib/neutron/external/pids/95411ff1-6cab-4c5b-9ab6-3779c480de3b.pid.haproxy
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     daemon
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: defaults
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     log global
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     mode http
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     option httplog
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     option dontlognull
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     option http-server-close
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     option forwardfor
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     retries                 3
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     timeout http-request    30s
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     timeout connect         30s
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     timeout client          32s
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     timeout server          32s
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     timeout http-keep-alive 30s
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: listen listener
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     bind 169.254.169.254:80
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:     http-request add-header X-OVN-Network-ID 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:58:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:58:43.262 106883 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'env', 'PROCESS_TAG=haproxy-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/95411ff1-6cab-4c5b-9ab6-3779c480de3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:58:43 compute-0 nova_compute[185194]: 2026-01-31 09:58:43.441 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:43 compute-0 podman[238435]: 2026-01-31 09:58:43.687536285 +0000 UTC m=+0.101580039 container create 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 09:58:43 compute-0 podman[238435]: 2026-01-31 09:58:43.643515555 +0000 UTC m=+0.057559389 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:58:43 compute-0 systemd[1]: Started libpod-conmon-93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f.scope.
Jan 31 09:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3c331042a0a955f3ab4288a226a884f97cb5fe6d67f0c0dcdd13159df7bce1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:58:43 compute-0 podman[238435]: 2026-01-31 09:58:43.803778032 +0000 UTC m=+0.217821856 container init 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 09:58:43 compute-0 podman[238435]: 2026-01-31 09:58:43.812095265 +0000 UTC m=+0.226139049 container start 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 09:58:43 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [NOTICE]   (238454) : New worker (238456) forked
Jan 31 09:58:43 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [NOTICE]   (238454) : Loading success.
Jan 31 09:58:44 compute-0 podman[238465]: 2026-01-31 09:58:44.764625614 +0000 UTC m=+0.085778027 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 09:58:46 compute-0 nova_compute[185194]: 2026-01-31 09:58:46.590 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:46 compute-0 podman[238489]: 2026-01-31 09:58:46.960566109 +0000 UTC m=+0.084003838 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=kepler, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 31 09:58:46 compute-0 podman[238490]: 2026-01-31 09:58:46.98675808 +0000 UTC m=+0.104680845 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 09:58:48 compute-0 nova_compute[185194]: 2026-01-31 09:58:48.444 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:50 compute-0 ovn_controller[97627]: 2026-01-31T09:58:50Z|00032|binding|INFO|Releasing lport aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49 from this chassis (sb_readonly=0)
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0500] manager: (patch-br-int-to-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.048 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0549] device (patch-br-int-to-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <warn>  [1769853530.0554] device (patch-br-int-to-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.063 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:50 compute-0 ovn_controller[97627]: 2026-01-31T09:58:50Z|00033|binding|INFO|Releasing lport aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49 from this chassis (sb_readonly=0)
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0683] manager: (patch-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0724] device (patch-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <warn>  [1769853530.0726] device (patch-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0788] manager: (patch-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0851] manager: (patch-br-int-to-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.086 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0885] device (patch-br-int-to-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 09:58:50 compute-0 NetworkManager[56281]: <info>  [1769853530.0916] device (patch-provnet-88e40f00-8bfa-49d1-96e5-fbb6bbd34bbe-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.363 185198 DEBUG nova.compute.manager [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-changed-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.364 185198 DEBUG nova.compute.manager [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Refreshing instance network info cache due to event network-changed-fea1caad-7786-4490-a707-f79cc6ff5fef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.364 185198 DEBUG oslo_concurrency.lockutils [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.365 185198 DEBUG oslo_concurrency.lockutils [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:58:50 compute-0 nova_compute[185194]: 2026-01-31 09:58:50.366 185198 DEBUG nova.network.neutron [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Refreshing network info cache for port fea1caad-7786-4490-a707-f79cc6ff5fef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:58:51 compute-0 nova_compute[185194]: 2026-01-31 09:58:51.565 185198 DEBUG nova.network.neutron [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Updated VIF entry in instance network info cache for port fea1caad-7786-4490-a707-f79cc6ff5fef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:58:51 compute-0 nova_compute[185194]: 2026-01-31 09:58:51.566 185198 DEBUG nova.network.neutron [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:58:51 compute-0 nova_compute[185194]: 2026-01-31 09:58:51.588 185198 DEBUG oslo_concurrency.lockutils [req-7e3d7dbe-f99f-4f82-aebb-3e3e6947c58a req-00d13459-b23c-48d0-ad63-b1955565e445 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:58:51 compute-0 nova_compute[185194]: 2026-01-31 09:58:51.592 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:53 compute-0 nova_compute[185194]: 2026-01-31 09:58:53.446 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:55 compute-0 podman[238528]: 2026-01-31 09:58:55.987277814 +0000 UTC m=+0.110858718 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 09:58:56 compute-0 nova_compute[185194]: 2026-01-31 09:58:56.594 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:58 compute-0 nova_compute[185194]: 2026-01-31 09:58:58.448 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:58:59 compute-0 podman[201068]: time="2026-01-31T09:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:58:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 09:58:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 31 09:59:00 compute-0 podman[238553]: 2026-01-31 09:59:00.970481642 +0000 UTC m=+0.098220005 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1769056855, name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 09:59:01 compute-0 openstack_network_exporter[204162]: ERROR   09:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:59:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:59:01 compute-0 openstack_network_exporter[204162]: ERROR   09:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:59:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:59:01 compute-0 nova_compute[185194]: 2026-01-31 09:59:01.599 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:02 compute-0 podman[238573]: 2026-01-31 09:59:02.967824828 +0000 UTC m=+0.094082699 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:59:03 compute-0 nova_compute[185194]: 2026-01-31 09:59:03.451 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:06 compute-0 nova_compute[185194]: 2026-01-31 09:59:06.601 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:08 compute-0 nova_compute[185194]: 2026-01-31 09:59:08.453 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:10 compute-0 podman[238591]: 2026-01-31 09:59:10.012916496 +0000 UTC m=+0.137972224 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:59:10 compute-0 podman[238592]: 2026-01-31 09:59:10.013456341 +0000 UTC m=+0.135788063 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0)
Jan 31 09:59:11 compute-0 nova_compute[185194]: 2026-01-31 09:59:11.604 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:13 compute-0 ovn_controller[97627]: 2026-01-31T09:59:13Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:7b:f6 192.168.0.202
Jan 31 09:59:13 compute-0 ovn_controller[97627]: 2026-01-31T09:59:13Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:7b:f6 192.168.0.202
Jan 31 09:59:13 compute-0 nova_compute[185194]: 2026-01-31 09:59:13.456 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:14 compute-0 podman[238656]: 2026-01-31 09:59:14.976553199 +0000 UTC m=+0.085592602 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 09:59:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:16.423 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:16.423 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:16.424 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:16 compute-0 nova_compute[185194]: 2026-01-31 09:59:16.606 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:17 compute-0 podman[238680]: 2026-01-31 09:59:17.988783267 +0000 UTC m=+0.112039062 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:59:17 compute-0 podman[238679]: 2026-01-31 09:59:17.994511067 +0000 UTC m=+0.119146200 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, config_id=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 31 09:59:18 compute-0 nova_compute[185194]: 2026-01-31 09:59:18.459 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:20 compute-0 ovn_controller[97627]: 2026-01-31T09:59:20Z|00034|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 31 09:59:21 compute-0 nova_compute[185194]: 2026-01-31 09:59:21.610 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:23 compute-0 nova_compute[185194]: 2026-01-31 09:59:23.461 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:26 compute-0 nova_compute[185194]: 2026-01-31 09:59:26.612 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:26 compute-0 podman[238718]: 2026-01-31 09:59:26.752408395 +0000 UTC m=+0.118050099 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 09:59:28 compute-0 nova_compute[185194]: 2026-01-31 09:59:28.464 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:28 compute-0 nova_compute[185194]: 2026-01-31 09:59:28.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:28 compute-0 nova_compute[185194]: 2026-01-31 09:59:28.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:59:28 compute-0 nova_compute[185194]: 2026-01-31 09:59:28.622 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:59:29 compute-0 podman[201068]: time="2026-01-31T09:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:59:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 09:59:29 compute-0 podman[201068]: @ - - [31/Jan/2026:09:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 31 09:59:31 compute-0 openstack_network_exporter[204162]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 09:59:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:59:31 compute-0 openstack_network_exporter[204162]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 09:59:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 09:59:31 compute-0 nova_compute[185194]: 2026-01-31 09:59:31.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:31 compute-0 nova_compute[185194]: 2026-01-31 09:59:31.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:59:31 compute-0 nova_compute[185194]: 2026-01-31 09:59:31.614 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:31 compute-0 podman[238742]: 2026-01-31 09:59:31.974948457 +0000 UTC m=+0.090627833 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_id=openstack_network_exporter, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal)
Jan 31 09:59:33 compute-0 nova_compute[185194]: 2026-01-31 09:59:33.466 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:33 compute-0 podman[238761]: 2026-01-31 09:59:33.937797761 +0000 UTC m=+0.056414267 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:59:34 compute-0 nova_compute[185194]: 2026-01-31 09:59:34.618 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:35 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:35.280 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:59:35 compute-0 nova_compute[185194]: 2026-01-31 09:59:35.281 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:35 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:35.283 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.619 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.643 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.645 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.741 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.791 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.792 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.873 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.876 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.965 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:36 compute-0 nova_compute[185194]: 2026-01-31 09:59:36.965 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.026 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.357 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.358 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5271MB free_disk=72.40414047241211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.359 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.359 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.629 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.629 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.630 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.786 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.809 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.852 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:59:37 compute-0 nova_compute[185194]: 2026-01-31 09:59:37.852 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:38 compute-0 nova_compute[185194]: 2026-01-31 09:59:38.469 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:38 compute-0 nova_compute[185194]: 2026-01-31 09:59:38.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:38 compute-0 nova_compute[185194]: 2026-01-31 09:59:38.633 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:38 compute-0 nova_compute[185194]: 2026-01-31 09:59:38.633 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:59:38 compute-0 nova_compute[185194]: 2026-01-31 09:59:38.634 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:59:39 compute-0 nova_compute[185194]: 2026-01-31 09:59:39.076 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:59:39 compute-0 nova_compute[185194]: 2026-01-31 09:59:39.078 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:59:39 compute-0 nova_compute[185194]: 2026-01-31 09:59:39.078 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 09:59:39 compute-0 nova_compute[185194]: 2026-01-31 09:59:39.078 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:59:41 compute-0 podman[238793]: 2026-01-31 09:59:41.001823722 +0000 UTC m=+0.109816645 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 09:59:41 compute-0 podman[238792]: 2026-01-31 09:59:41.034933417 +0000 UTC m=+0.143097374 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:59:41 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:41.286 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.622 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.681 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.683 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.704 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.787 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.788 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.799 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.800 185198 INFO nova.compute.claims [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.882 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.898 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.899 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.900 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.900 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.900 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.901 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.901 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.902 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.903 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.903 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.962 185198 DEBUG nova.compute.provider_tree [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:59:41 compute-0 nova_compute[185194]: 2026-01-31 09:59:41.978 185198 DEBUG nova.scheduler.client.report [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.010 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.011 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.064 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.066 185198 DEBUG nova.network.neutron [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.089 185198 INFO nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.141 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.241 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.242 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.243 185198 INFO nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Creating image(s)
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.243 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.244 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.245 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.258 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.309 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.312 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.312 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.331 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.383 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.384 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.422 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.424 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.425 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.487 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.488 185198 DEBUG nova.virt.disk.api [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking if we can resize image /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.489 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.545 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.546 185198 DEBUG nova.virt.disk.api [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Cannot resize image /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.547 185198 DEBUG nova.objects.instance [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'migration_context' on Instance uuid 11b288d2-4ade-4790-8f82-165b662f9a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.565 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.565 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.566 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.584 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.638 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.639 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.640 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.657 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.709 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.713 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.769 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.770 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.771 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.820 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.821 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.821 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Ensure instance console log exists: /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.822 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.823 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:42 compute-0 nova_compute[185194]: 2026-01-31 09:59:42.823 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.342 185198 DEBUG nova.network.neutron [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Successfully updated port: c6014353-db88-4d66-9154-67869a227159 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.385 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.386 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.386 185198 DEBUG nova.network.neutron [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.471 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.541 185198 DEBUG nova.network.neutron [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.873 185198 DEBUG nova.compute.manager [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-changed-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.875 185198 DEBUG nova.compute.manager [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Refreshing instance network info cache due to event network-changed-c6014353-db88-4d66-9154-67869a227159. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:59:43 compute-0 nova_compute[185194]: 2026-01-31 09:59:43.876 185198 DEBUG oslo_concurrency.lockutils [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.530 185198 DEBUG nova.network.neutron [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.978 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.979 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance network_info: |[{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.981 185198 DEBUG oslo_concurrency.lockutils [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.982 185198 DEBUG nova.network.neutron [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Refreshing network info cache for port c6014353-db88-4d66-9154-67869a227159 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:59:44 compute-0 nova_compute[185194]: 2026-01-31 09:59:44.987 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Start _get_guest_xml network_info=[{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.000 185198 WARNING nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.008 185198 DEBUG nova.virt.libvirt.host [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.009 185198 DEBUG nova.virt.libvirt.host [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.014 185198 DEBUG nova.virt.libvirt.host [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.014 185198 DEBUG nova.virt.libvirt.host [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.015 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.015 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T09:57:04Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='5ace5526-788a-41cf-9e40-e75da8858688',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.016 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.016 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.016 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.017 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.017 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.017 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.018 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.018 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.018 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.019 185198 DEBUG nova.virt.hardware [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.022 185198 DEBUG nova.virt.libvirt.vif [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',id=2,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-5t72byzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:59:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzA3MDIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 31 09:59:45 compute-0 nova_compute[185194]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzA3MDIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=11b288d2-4ade-4790-8f82-165b662f9a1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.022 185198 DEBUG nova.network.os_vif_util [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.023 185198 DEBUG nova.network.os_vif_util [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.024 185198 DEBUG nova.objects.instance [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11b288d2-4ade-4790-8f82-165b662f9a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.084 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <uuid>11b288d2-4ade-4790-8f82-165b662f9a1e</uuid>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <name>instance-00000002</name>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <memory>524288</memory>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <metadata>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:name>vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq</nova:name>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 09:59:45</nova:creationTime>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:flavor name="m1.small">
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:memory>512</nova:memory>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:user uuid="d3342a7282114996b6010246d4ade24e">admin</nova:user>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:project uuid="155389cbed6644acacdbeeb6155adb54">admin</nova:project>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="8b57d666-88c0-4e62-a76a-0d45801ca1a6"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         <nova:port uuid="c6014353-db88-4d66-9154-67869a227159">
Jan 31 09:59:45 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="192.168.0.231" ipVersion="4"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </metadata>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <system>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="serial">11b288d2-4ade-4790-8f82-165b662f9a1e</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="uuid">11b288d2-4ade-4790-8f82-165b662f9a1e</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </system>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <os>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </os>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <features>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <apic/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </features>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </clock>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </cpu>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   <devices>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <target dev="vdb" bus="virtio"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.config"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </disk>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:7d:ae:e7"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <target dev="tapc6014353-db"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </interface>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/console.log" append="off"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </serial>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <video>
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </video>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </rng>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 09:59:45 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 09:59:45 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 09:59:45 compute-0 nova_compute[185194]:   </devices>
Jan 31 09:59:45 compute-0 nova_compute[185194]: </domain>
Jan 31 09:59:45 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.085 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Preparing to wait for external event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.086 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.086 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.086 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.087 185198 DEBUG nova.virt.libvirt.vif [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',id=2,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-5t72byzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:59:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzA3MDIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 31 09:59:45 compute-0 nova_compute[185194]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzA3MDIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=11b288d2-4ade-4790-8f82-165b662f9a1e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.087 185198 DEBUG nova.network.os_vif_util [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.088 185198 DEBUG nova.network.os_vif_util [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.088 185198 DEBUG os_vif [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.089 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.089 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.090 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.093 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.093 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6014353-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.094 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6014353-db, col_values=(('external_ids', {'iface-id': 'c6014353-db88-4d66-9154-67869a227159', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:ae:e7', 'vm-uuid': '11b288d2-4ade-4790-8f82-165b662f9a1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.095 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 NetworkManager[56281]: <info>  [1769853585.0979] manager: (tapc6014353-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.099 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.106 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.108 185198 INFO os_vif [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db')
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.158 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.159 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.159 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.160 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No VIF found with MAC fa:16:3e:7d:ae:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.161 185198 INFO nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Using config drive
Jan 31 09:59:45 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 09:59:45.022 185198 DEBUG nova.virt.libvirt.vif [None req-6a654150-3a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 09:59:45 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 09:59:45.087 185198 DEBUG nova.virt.libvirt.vif [None req-6a654150-3a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.429 185198 INFO nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Creating config drive at /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.config
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.436 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbpjnb5z_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.560 185198 DEBUG oslo_concurrency.processutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbpjnb5z_" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:59:45 compute-0 kernel: tapc6014353-db: entered promiscuous mode
Jan 31 09:59:45 compute-0 ovn_controller[97627]: 2026-01-31T09:59:45Z|00035|binding|INFO|Claiming lport c6014353-db88-4d66-9154-67869a227159 for this chassis.
Jan 31 09:59:45 compute-0 ovn_controller[97627]: 2026-01-31T09:59:45Z|00036|binding|INFO|c6014353-db88-4d66-9154-67869a227159: Claiming fa:16:3e:7d:ae:e7 192.168.0.231
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.657 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 ovn_controller[97627]: 2026-01-31T09:59:45Z|00037|binding|INFO|Setting lport c6014353-db88-4d66-9154-67869a227159 ovn-installed in OVS
Jan 31 09:59:45 compute-0 ovn_controller[97627]: 2026-01-31T09:59:45Z|00038|binding|INFO|Setting lport c6014353-db88-4d66-9154-67869a227159 up in Southbound
Jan 31 09:59:45 compute-0 NetworkManager[56281]: <info>  [1769853585.6718] manager: (tapc6014353-db): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.672 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ae:e7 192.168.0.231'], port_security=['fa:16:3e:7d:ae:e7 192.168.0.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-port-h5agd3bpn43q', 'neutron:cidrs': '192.168.0.231/24', 'neutron:device_id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-port-h5agd3bpn43q', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=c6014353-db88-4d66-9154-67869a227159) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.675 106883 INFO neutron.agent.ovn.metadata.agent [-] Port c6014353-db88-4d66-9154-67869a227159 in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b bound to our chassis
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.678 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.667 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.675 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.696 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3c363835-da73-4c2c-8cd7-12b8f82a1a42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 systemd-machined[156556]: New machine qemu-2-instance-00000002.
Jan 31 09:59:45 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.733 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[aac17a90-7749-4b4f-b745-bf35c507745c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.737 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[50bf8d6d-d27d-420c-a9b2-ce77d55b8d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 podman[238874]: 2026-01-31 09:59:45.741360457 +0000 UTC m=+0.107993709 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 09:59:45 compute-0 systemd-udevd[238910]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:59:45 compute-0 NetworkManager[56281]: <info>  [1769853585.7607] device (tapc6014353-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.759 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd89c20-d158-4695-a455-a8206cbfc750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 NetworkManager[56281]: <info>  [1769853585.7671] device (tapc6014353-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.774 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[da105825-ec9d-4148-8e0d-94360574ffc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 36203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238919, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.786 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[b04207bb-afd7-4eb9-b263-7f5f203da697]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238920, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238920, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.787 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.789 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 nova_compute[185194]: 2026-01-31 09:59:45.791 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.791 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.792 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.792 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:59:45 compute-0 ovn_metadata_agent[106878]: 2026-01-31 09:59:45.793 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.147 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853586.1458325, 11b288d2-4ade-4790-8f82-165b662f9a1e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.149 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] VM Started (Lifecycle Event)
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.175 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.181 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853586.1460109, 11b288d2-4ade-4790-8f82-165b662f9a1e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.181 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] VM Paused (Lifecycle Event)
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.200 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.211 185198 DEBUG nova.compute.manager [req-7a5d5413-ae4b-4f9c-854a-c8374aa083f9 req-c9c6c853-8eca-4dd0-9378-c20437767a9f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.211 185198 DEBUG oslo_concurrency.lockutils [req-7a5d5413-ae4b-4f9c-854a-c8374aa083f9 req-c9c6c853-8eca-4dd0-9378-c20437767a9f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.211 185198 DEBUG oslo_concurrency.lockutils [req-7a5d5413-ae4b-4f9c-854a-c8374aa083f9 req-c9c6c853-8eca-4dd0-9378-c20437767a9f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.211 185198 DEBUG oslo_concurrency.lockutils [req-7a5d5413-ae4b-4f9c-854a-c8374aa083f9 req-c9c6c853-8eca-4dd0-9378-c20437767a9f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.212 185198 DEBUG nova.compute.manager [req-7a5d5413-ae4b-4f9c-854a-c8374aa083f9 req-c9c6c853-8eca-4dd0-9378-c20437767a9f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Processing event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.212 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.214 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.221 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.226 185198 INFO nova.virt.libvirt.driver [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance spawned successfully.
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.226 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.253 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.254 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853586.2174067, 11b288d2-4ade-4790-8f82-165b662f9a1e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.254 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] VM Resumed (Lifecycle Event)
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.270 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.270 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.271 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.271 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.271 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.272 185198 DEBUG nova.virt.libvirt.driver [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.325 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.334 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.360 185198 INFO nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Took 4.12 seconds to spawn the instance on the hypervisor.
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.361 185198 DEBUG nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.412 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.435 185198 INFO nova.compute.manager [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Took 4.67 seconds to build instance.
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.451 185198 DEBUG oslo_concurrency.lockutils [None req-6a654150-3a5d-4460-8060-166ab3dfa250 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.463 185198 DEBUG nova.network.neutron [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated VIF entry in instance network info cache for port c6014353-db88-4d66-9154-67869a227159. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.463 185198 DEBUG nova.network.neutron [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.482 185198 DEBUG oslo_concurrency.lockutils [req-24508c4e-f11f-4aad-ae0e-c1476e411a85 req-8654c9b3-362e-4aa2-a48d-39fd0d6517a6 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:59:46 compute-0 nova_compute[185194]: 2026-01-31 09:59:46.625 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.333 185198 DEBUG nova.compute.manager [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.334 185198 DEBUG oslo_concurrency.lockutils [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.334 185198 DEBUG oslo_concurrency.lockutils [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.334 185198 DEBUG oslo_concurrency.lockutils [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.334 185198 DEBUG nova.compute.manager [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] No waiting events found dispatching network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:59:48 compute-0 nova_compute[185194]: 2026-01-31 09:59:48.334 185198 WARNING nova.compute.manager [req-a2ce5b51-7604-4920-8d03-b4965a2bea23 req-8d435811-f183-4594-ad41-70508c2d0cc5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received unexpected event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 for instance with vm_state active and task_state None.
Jan 31 09:59:48 compute-0 podman[238930]: 2026-01-31 09:59:48.98259555 +0000 UTC m=+0.096479512 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 31 09:59:48 compute-0 podman[238929]: 2026-01-31 09:59:48.985839871 +0000 UTC m=+0.099657772 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Jan 31 09:59:50 compute-0 nova_compute[185194]: 2026-01-31 09:59:50.097 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:51 compute-0 nova_compute[185194]: 2026-01-31 09:59:51.627 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:55 compute-0 nova_compute[185194]: 2026-01-31 09:59:55.100 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:56 compute-0 nova_compute[185194]: 2026-01-31 09:59:56.632 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:59:56 compute-0 podman[238970]: 2026-01-31 09:59:56.983018987 +0000 UTC m=+0.101337244 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 09:59:59 compute-0 podman[201068]: time="2026-01-31T09:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 09:59:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 09:59:59 compute-0 podman[201068]: @ - - [31/Jan/2026:09:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 31 10:00:00 compute-0 nova_compute[185194]: 2026-01-31 10:00:00.103 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:01 compute-0 openstack_network_exporter[204162]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:00:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:00:01 compute-0 openstack_network_exporter[204162]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:00:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:00:01 compute-0 nova_compute[185194]: 2026-01-31 10:00:01.663 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:02 compute-0 podman[238994]: 2026-01-31 10:00:02.961508177 +0000 UTC m=+0.091077648 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-type=git, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7)
Jan 31 10:00:04 compute-0 podman[239016]: 2026-01-31 10:00:04.934346963 +0000 UTC m=+0.060654221 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 10:00:05 compute-0 nova_compute[185194]: 2026-01-31 10:00:05.107 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.635 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.680 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.706 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid a6212880-427f-4876-8598-06909416bde1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.707 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid 11b288d2-4ade-4790-8f82-165b662f9a1e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.708 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "a6212880-427f-4876-8598-06909416bde1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.709 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.710 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.711 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.752 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:00:06 compute-0 nova_compute[185194]: 2026-01-31 10:00:06.753 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:00:10 compute-0 nova_compute[185194]: 2026-01-31 10:00:10.109 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:11 compute-0 nova_compute[185194]: 2026-01-31 10:00:11.637 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:11 compute-0 podman[239036]: 2026-01-31 10:00:11.951173023 +0000 UTC m=+0.073377968 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:00:11 compute-0 podman[239035]: 2026-01-31 10:00:11.974573276 +0000 UTC m=+0.099619232 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:00:15 compute-0 nova_compute[185194]: 2026-01-31 10:00:15.113 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:15 compute-0 ovn_controller[97627]: 2026-01-31T10:00:15Z|00039|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 10:00:15 compute-0 podman[239080]: 2026-01-31 10:00:15.955528535 +0000 UTC m=+0.079867069 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:00:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:00:16.424 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:00:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:00:16.425 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:00:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:00:16.425 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:00:16 compute-0 nova_compute[185194]: 2026-01-31 10:00:16.642 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:18 compute-0 ovn_controller[97627]: 2026-01-31T10:00:18Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:ae:e7 192.168.0.231
Jan 31 10:00:18 compute-0 ovn_controller[97627]: 2026-01-31T10:00:18Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:ae:e7 192.168.0.231
Jan 31 10:00:19 compute-0 podman[239117]: 2026-01-31 10:00:19.968866091 +0000 UTC m=+0.086087865 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9)
Jan 31 10:00:19 compute-0 podman[239118]: 2026-01-31 10:00:19.983082355 +0000 UTC m=+0.097652613 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 10:00:20 compute-0 nova_compute[185194]: 2026-01-31 10:00:20.117 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:21 compute-0 nova_compute[185194]: 2026-01-31 10:00:21.644 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:25 compute-0 nova_compute[185194]: 2026-01-31 10:00:25.121 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:26 compute-0 nova_compute[185194]: 2026-01-31 10:00:26.646 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:27 compute-0 podman[239154]: 2026-01-31 10:00:27.938511532 +0000 UTC m=+0.061787060 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:00:29 compute-0 podman[201068]: time="2026-01-31T10:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:00:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:00:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 31 10:00:30 compute-0 nova_compute[185194]: 2026-01-31 10:00:30.124 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.697 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.697 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.697 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.705 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a6212880-427f-4876-8598-06909416bde1 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.112 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a6212880-427f-4876-8598-06909416bde1 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:00:31 compute-0 openstack_network_exporter[204162]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:00:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:00:31 compute-0 openstack_network_exporter[204162]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:00:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:00:31 compute-0 nova_compute[185194]: 2026-01-31 10:00:31.649 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.650 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Sat, 31 Jan 2026 10:00:31 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4280bbdf-6758-4fbb-9ecb-3889772f4f3d x-openstack-request-id: req-4280bbdf-6758-4fbb-9ecb-3889772f4f3d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.651 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a6212880-427f-4876-8598-06909416bde1", "name": "test_0", "status": "ACTIVE", "tenant_id": "155389cbed6644acacdbeeb6155adb54", "user_id": "d3342a7282114996b6010246d4ade24e", "metadata": {}, "hostId": "67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d", "image": {"id": "8b57d666-88c0-4e62-a76a-0d45801ca1a6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8b57d666-88c0-4e62-a76a-0d45801ca1a6"}]}, "flavor": {"id": "5ace5526-788a-41cf-9e40-e75da8858688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/5ace5526-788a-41cf-9e40-e75da8858688"}]}, "created": "2026-01-31T09:58:29Z", "updated": "2026-01-31T09:58:40Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.202", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:7b:f6"}, {"version": 4, "addr": "192.168.122.213", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:e0:7b:f6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a6212880-427f-4876-8598-06909416bde1"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a6212880-427f-4876-8598-06909416bde1"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-31T09:58:40.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.651 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a6212880-427f-4876-8598-06909416bde1 used request id req-4280bbdf-6758-4fbb-9ecb-3889772f4f3d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.654 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.657 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 11b288d2-4ade-4790-8f82-165b662f9a1e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:00:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:31.659 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/11b288d2-4ade-4790-8f82-165b662f9a1e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.395 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 31 Jan 2026 10:00:31 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b8e75278-77ed-488f-b4dd-6bbade2d7d57 x-openstack-request-id: req-b8e75278-77ed-488f-b4dd-6bbade2d7d57 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.396 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "11b288d2-4ade-4790-8f82-165b662f9a1e", "name": "vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq", "status": "ACTIVE", "tenant_id": "155389cbed6644acacdbeeb6155adb54", "user_id": "d3342a7282114996b6010246d4ade24e", "metadata": {"metering.server_group": "cd99fa32-2992-4cd0-a9a0-648127ea67dc"}, "hostId": "67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d", "image": {"id": "8b57d666-88c0-4e62-a76a-0d45801ca1a6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8b57d666-88c0-4e62-a76a-0d45801ca1a6"}]}, "flavor": {"id": "5ace5526-788a-41cf-9e40-e75da8858688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/5ace5526-788a-41cf-9e40-e75da8858688"}]}, "created": "2026-01-31T09:59:39Z", "updated": "2026-01-31T09:59:46Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.231", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7d:ae:e7"}, {"version": 4, "addr": "192.168.122.222", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:7d:ae:e7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/11b288d2-4ade-4790-8f82-165b662f9a1e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/11b288d2-4ade-4790-8f82-165b662f9a1e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-31T09:59:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.396 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/11b288d2-4ade-4790-8f82-165b662f9a1e used request id req-b8e75278-77ed-488f-b4dd-6bbade2d7d57 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.398 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.398 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.400 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.401 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:00:32.399435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:00:32.402156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.406 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a6212880-427f-4876-8598-06909416bde1 / tapfea1caad-77 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.406 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.410 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 11b288d2-4ade-4790-8f82-165b662f9a1e / tapc6014353-db inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.410 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:00:32.411284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:00:32.412416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.439 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.439 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.440 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.470 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.471 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.471 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.472 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.473 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:00:32.473972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.532 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.532 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.533 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.582 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.583 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.583 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.584 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:00:32.584457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.604 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.61328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.623 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.623 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.623 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:00:32.623087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.624 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 681952057 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.624 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.624 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:00:32.625474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.625 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.626 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:00:32.626628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.627 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.627 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.627 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.627 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:00:32.628566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.629 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.630 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:00:32.629905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.632 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-31T10:00:32.631909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.632 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq>]
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:00:32.633133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:00:32.634394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.635 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:00:32.635592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.636 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.637 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.637 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 33710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:00:32.636830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 30800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:00:32.638063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:00:32.639655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 1821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.640 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:00:32.640784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.642 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.642 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.642 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:00:32.641998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.642 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.643 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.643 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.643 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-31T10:00:32.644394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq>]
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.645 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4356417496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.646 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.646 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:00:32.645167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.647 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.648 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.649 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:00:32.647450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.649 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.649 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:00:32.649010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.650 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.650 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.650 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.651 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.652 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.652 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.652 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:00:32.651719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.653 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.653 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.654 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:00:32.654282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.655 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.656 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:00:32.657 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:00:33 compute-0 podman[239180]: 2026-01-31 10:00:33.95992595 +0000 UTC m=+0.088388011 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter)
Jan 31 10:00:35 compute-0 nova_compute[185194]: 2026-01-31 10:00:35.128 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:35 compute-0 podman[239202]: 2026-01-31 10:00:35.993017864 +0000 UTC m=+0.103458637 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 10:00:36 compute-0 nova_compute[185194]: 2026-01-31 10:00:36.630 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:36 compute-0 nova_compute[185194]: 2026-01-31 10:00:36.652 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:37 compute-0 nova_compute[185194]: 2026-01-31 10:00:37.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:38 compute-0 nova_compute[185194]: 2026-01-31 10:00:38.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:38 compute-0 nova_compute[185194]: 2026-01-31 10:00:38.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:00:38 compute-0 nova_compute[185194]: 2026-01-31 10:00:38.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:00:39 compute-0 nova_compute[185194]: 2026-01-31 10:00:39.254 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:00:39 compute-0 nova_compute[185194]: 2026-01-31 10:00:39.254 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:00:39 compute-0 nova_compute[185194]: 2026-01-31 10:00:39.255 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:00:39 compute-0 nova_compute[185194]: 2026-01-31 10:00:39.255 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.132 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.743 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.891 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.892 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.893 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.893 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.894 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.895 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.896 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:00:40 compute-0 nova_compute[185194]: 2026-01-31 10:00:40.896 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.020 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.021 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.022 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.022 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.260 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.348 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.349 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.427 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.428 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.508 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.510 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.579 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.588 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.656 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.667 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.668 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.732 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.733 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.796 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.797 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:00:41 compute-0 nova_compute[185194]: 2026-01-31 10:00:41.866 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.278 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.279 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5083MB free_disk=72.38211822509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.279 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.279 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.517 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.518 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.518 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.518 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.651 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.688 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.735 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:00:42 compute-0 nova_compute[185194]: 2026-01-31 10:00:42.735 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:00:42 compute-0 podman[239247]: 2026-01-31 10:00:42.961059727 +0000 UTC m=+0.078094725 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true)
Jan 31 10:00:42 compute-0 podman[239246]: 2026-01-31 10:00:42.993509155 +0000 UTC m=+0.111942358 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, tcib_managed=true)
Jan 31 10:00:44 compute-0 nova_compute[185194]: 2026-01-31 10:00:44.449 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:00:45 compute-0 nova_compute[185194]: 2026-01-31 10:00:45.136 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:46 compute-0 nova_compute[185194]: 2026-01-31 10:00:46.656 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:46 compute-0 podman[239289]: 2026-01-31 10:00:46.962472568 +0000 UTC m=+0.082534598 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:00:50 compute-0 nova_compute[185194]: 2026-01-31 10:00:50.140 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:50 compute-0 podman[239315]: 2026-01-31 10:00:50.971340066 +0000 UTC m=+0.072244816 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 31 10:00:50 compute-0 podman[239314]: 2026-01-31 10:00:50.979048384 +0000 UTC m=+0.089384265 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Jan 31 10:00:51 compute-0 nova_compute[185194]: 2026-01-31 10:00:51.659 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:55 compute-0 nova_compute[185194]: 2026-01-31 10:00:55.144 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:56 compute-0 nova_compute[185194]: 2026-01-31 10:00:56.662 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:00:58 compute-0 podman[239355]: 2026-01-31 10:00:58.959349457 +0000 UTC m=+0.074917342 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:00:59 compute-0 podman[201068]: time="2026-01-31T10:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:00:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:00:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 31 10:01:00 compute-0 nova_compute[185194]: 2026-01-31 10:01:00.147 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:01 compute-0 openstack_network_exporter[204162]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:01:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:01:01 compute-0 openstack_network_exporter[204162]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:01:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:01:01 compute-0 nova_compute[185194]: 2026-01-31 10:01:01.664 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:01 compute-0 CROND[239379]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 10:01:01 compute-0 run-parts[239382]: (/etc/cron.hourly) starting 0anacron
Jan 31 10:01:01 compute-0 run-parts[239388]: (/etc/cron.hourly) finished 0anacron
Jan 31 10:01:01 compute-0 CROND[239378]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 10:01:04 compute-0 podman[239389]: 2026-01-31 10:01:04.988606313 +0000 UTC m=+0.108793859 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1769056855, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:01:05 compute-0 nova_compute[185194]: 2026-01-31 10:01:05.152 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:06 compute-0 nova_compute[185194]: 2026-01-31 10:01:06.668 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:06 compute-0 podman[239412]: 2026-01-31 10:01:06.94295642 +0000 UTC m=+0.063573584 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:01:10 compute-0 nova_compute[185194]: 2026-01-31 10:01:10.157 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:11 compute-0 nova_compute[185194]: 2026-01-31 10:01:11.670 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:13 compute-0 podman[239432]: 2026-01-31 10:01:13.971194872 +0000 UTC m=+0.096868327 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 10:01:13 compute-0 podman[239431]: 2026-01-31 10:01:13.980441588 +0000 UTC m=+0.110100761 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 10:01:15 compute-0 nova_compute[185194]: 2026-01-31 10:01:15.165 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:01:16.425 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:01:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:01:16.426 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:01:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:01:16.426 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:01:16 compute-0 nova_compute[185194]: 2026-01-31 10:01:16.673 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:17 compute-0 podman[239477]: 2026-01-31 10:01:17.960616074 +0000 UTC m=+0.078198832 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:01:20 compute-0 nova_compute[185194]: 2026-01-31 10:01:20.169 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:21 compute-0 nova_compute[185194]: 2026-01-31 10:01:21.676 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:21 compute-0 podman[239502]: 2026-01-31 10:01:21.956113645 +0000 UTC m=+0.074958872 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Jan 31 10:01:21 compute-0 podman[239503]: 2026-01-31 10:01:21.970419685 +0000 UTC m=+0.084294851 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 10:01:25 compute-0 nova_compute[185194]: 2026-01-31 10:01:25.172 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:26 compute-0 nova_compute[185194]: 2026-01-31 10:01:26.680 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:29 compute-0 podman[201068]: time="2026-01-31T10:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:01:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:01:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 31 10:01:29 compute-0 podman[239538]: 2026-01-31 10:01:29.991088992 +0000 UTC m=+0.104518664 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:01:30 compute-0 nova_compute[185194]: 2026-01-31 10:01:30.176 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:31 compute-0 openstack_network_exporter[204162]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:01:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:01:31 compute-0 openstack_network_exporter[204162]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:01:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:01:31 compute-0 nova_compute[185194]: 2026-01-31 10:01:31.682 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:35 compute-0 nova_compute[185194]: 2026-01-31 10:01:35.180 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:36 compute-0 podman[239562]: 2026-01-31 10:01:36.008116719 +0000 UTC m=+0.125436045 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 31 10:01:36 compute-0 nova_compute[185194]: 2026-01-31 10:01:36.684 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:37 compute-0 podman[239582]: 2026-01-31 10:01:37.964680468 +0000 UTC m=+0.086667589 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:01:38 compute-0 nova_compute[185194]: 2026-01-31 10:01:38.601 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:38 compute-0 nova_compute[185194]: 2026-01-31 10:01:38.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:38 compute-0 nova_compute[185194]: 2026-01-31 10:01:38.604 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:01:39 compute-0 nova_compute[185194]: 2026-01-31 10:01:39.316 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:01:39 compute-0 nova_compute[185194]: 2026-01-31 10:01:39.317 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:01:39 compute-0 nova_compute[185194]: 2026-01-31 10:01:39.317 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.183 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.876 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.897 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.897 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.898 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.898 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.898 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.899 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:40 compute-0 nova_compute[185194]: 2026-01-31 10:01:40.899 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.053 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.054 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.055 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.055 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.341 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.406 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.408 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.495 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.497 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.559 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.561 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.618 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.627 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.682 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.684 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.699 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.745 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.746 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.794 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.796 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:01:41 compute-0 nova_compute[185194]: 2026-01-31 10:01:41.839 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.190 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.191 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5067MB free_disk=72.38211822509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.191 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.192 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.283 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.284 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.284 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.285 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.364 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.380 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.381 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:01:42 compute-0 nova_compute[185194]: 2026-01-31 10:01:42.382 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:01:43 compute-0 nova_compute[185194]: 2026-01-31 10:01:43.088 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:43 compute-0 nova_compute[185194]: 2026-01-31 10:01:43.113 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:43 compute-0 nova_compute[185194]: 2026-01-31 10:01:43.114 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:01:43 compute-0 nova_compute[185194]: 2026-01-31 10:01:43.115 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:01:44 compute-0 podman[239627]: 2026-01-31 10:01:44.769917131 +0000 UTC m=+0.088816761 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 31 10:01:44 compute-0 podman[239626]: 2026-01-31 10:01:44.779083005 +0000 UTC m=+0.101055040 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:01:45 compute-0 nova_compute[185194]: 2026-01-31 10:01:45.185 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:46 compute-0 nova_compute[185194]: 2026-01-31 10:01:46.689 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 10:01:48 compute-0 podman[239673]: 2026-01-31 10:01:48.967060259 +0000 UTC m=+0.088210766 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:01:50 compute-0 nova_compute[185194]: 2026-01-31 10:01:50.190 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:51 compute-0 nova_compute[185194]: 2026-01-31 10:01:51.692 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:52 compute-0 podman[239699]: 2026-01-31 10:01:52.955018959 +0000 UTC m=+0.070869309 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:01:52 compute-0 podman[239698]: 2026-01-31 10:01:52.965240479 +0000 UTC m=+0.083940079 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:01:55 compute-0 nova_compute[185194]: 2026-01-31 10:01:55.194 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:56 compute-0 nova_compute[185194]: 2026-01-31 10:01:56.694 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:01:59 compute-0 podman[201068]: time="2026-01-31T10:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:01:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:01:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 31 10:02:00 compute-0 nova_compute[185194]: 2026-01-31 10:02:00.197 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:00 compute-0 podman[239737]: 2026-01-31 10:02:00.956795482 +0000 UTC m=+0.072869398 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:02:01 compute-0 openstack_network_exporter[204162]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:02:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:02:01 compute-0 openstack_network_exporter[204162]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:02:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:02:01 compute-0 nova_compute[185194]: 2026-01-31 10:02:01.696 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:05 compute-0 nova_compute[185194]: 2026-01-31 10:02:05.200 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:06 compute-0 nova_compute[185194]: 2026-01-31 10:02:06.699 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:06 compute-0 podman[239761]: 2026-01-31 10:02:06.991498177 +0000 UTC m=+0.111730901 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:02:08 compute-0 podman[239781]: 2026-01-31 10:02:08.953474779 +0000 UTC m=+0.077159332 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 10:02:10 compute-0 nova_compute[185194]: 2026-01-31 10:02:10.203 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:11 compute-0 nova_compute[185194]: 2026-01-31 10:02:11.703 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:14 compute-0 podman[239799]: 2026-01-31 10:02:14.971282691 +0000 UTC m=+0.097674946 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:02:14 compute-0 podman[239800]: 2026-01-31 10:02:14.980072257 +0000 UTC m=+0.096658152 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:02:15 compute-0 nova_compute[185194]: 2026-01-31 10:02:15.205 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:02:16.426 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:02:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:02:16.427 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:02:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:02:16.427 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:02:16 compute-0 nova_compute[185194]: 2026-01-31 10:02:16.704 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:19 compute-0 podman[239844]: 2026-01-31 10:02:19.9634194 +0000 UTC m=+0.081639693 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:02:20 compute-0 nova_compute[185194]: 2026-01-31 10:02:20.210 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:21 compute-0 nova_compute[185194]: 2026-01-31 10:02:21.707 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:23 compute-0 podman[239873]: 2026-01-31 10:02:23.96301554 +0000 UTC m=+0.082289609 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 31 10:02:24 compute-0 podman[239872]: 2026-01-31 10:02:24.012532004 +0000 UTC m=+0.131354472 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, release-0.7.12=, io.openshift.expose-services=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Jan 31 10:02:25 compute-0 nova_compute[185194]: 2026-01-31 10:02:25.214 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:26 compute-0 nova_compute[185194]: 2026-01-31 10:02:26.710 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:29 compute-0 podman[201068]: time="2026-01-31T10:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:02:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:02:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 31 10:02:30 compute-0 nova_compute[185194]: 2026-01-31 10:02:30.218 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.698 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.698 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.706 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.709 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.710 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:02:30.710643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:02:30.712395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.718 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.723 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.724 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.725 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:02:30.725073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.726 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:02:30.727009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.753 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.754 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.755 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.788 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.789 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.789 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.790 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.791 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:02:30.791088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.865 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.866 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.866 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.955 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.956 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.956 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.957 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:02:30.957188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.976 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.994 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.1875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.995 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:02:30.995675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.996 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:02:30.997821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.999 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:02:30.999074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:30.999 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.000 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.000 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.000 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.000 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:02:31.001355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.001 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:02:31.002549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.002 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.003 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.003 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.003 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.003 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.004 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:02:31.004609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.005 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:02:31.005854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:02:31.006985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:02:31.008305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.008 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 3405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 35220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.009 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 76540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:02:31.009491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:02:31.010709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 4864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.012 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:02:31.011839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.012 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 3043 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:02:31.013484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.013 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.014 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.014 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.014 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.016 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4386558050 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.017 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:02:31.016109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.017 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.018 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:02:31.018312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.019 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.020 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:02:31.019515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.020 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.020 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.021 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:02:31.022118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.022 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.023 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.023 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.023 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:02:31.024595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.025 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.026 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:02:31.027 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:02:31 compute-0 openstack_network_exporter[204162]: ERROR   10:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:02:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:02:31 compute-0 openstack_network_exporter[204162]: ERROR   10:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:02:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:02:31 compute-0 nova_compute[185194]: 2026-01-31 10:02:31.712 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:31 compute-0 podman[239911]: 2026-01-31 10:02:31.975385903 +0000 UTC m=+0.101013788 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:02:35 compute-0 nova_compute[185194]: 2026-01-31 10:02:35.221 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:36 compute-0 nova_compute[185194]: 2026-01-31 10:02:36.715 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:37 compute-0 podman[239935]: 2026-01-31 10:02:37.966956699 +0000 UTC m=+0.087174488 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 10:02:39 compute-0 nova_compute[185194]: 2026-01-31 10:02:39.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:39 compute-0 nova_compute[185194]: 2026-01-31 10:02:39.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:39 compute-0 nova_compute[185194]: 2026-01-31 10:02:39.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:39 compute-0 podman[239956]: 2026-01-31 10:02:39.96881005 +0000 UTC m=+0.091305960 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 10:02:40 compute-0 nova_compute[185194]: 2026-01-31 10:02:40.225 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:40 compute-0 nova_compute[185194]: 2026-01-31 10:02:40.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:40 compute-0 nova_compute[185194]: 2026-01-31 10:02:40.604 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:02:40 compute-0 nova_compute[185194]: 2026-01-31 10:02:40.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:02:41 compute-0 nova_compute[185194]: 2026-01-31 10:02:41.455 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:02:41 compute-0 nova_compute[185194]: 2026-01-31 10:02:41.457 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:02:41 compute-0 nova_compute[185194]: 2026-01-31 10:02:41.457 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:02:41 compute-0 nova_compute[185194]: 2026-01-31 10:02:41.458 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:02:41 compute-0 nova_compute[185194]: 2026-01-31 10:02:41.717 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.123 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.142 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.143 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.144 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.144 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.144 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.145 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.174 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.175 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.175 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.176 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.301 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.355 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.357 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.405 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.407 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.451 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.452 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.498 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.504 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.554 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.555 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.641 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.643 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.703 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.705 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:02:43 compute-0 nova_compute[185194]: 2026-01-31 10:02:43.762 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.043 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.045 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.38211822509766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.046 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.046 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.136 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.137 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.138 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.138 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.213 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.229 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.232 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.233 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.693 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:02:44 compute-0 nova_compute[185194]: 2026-01-31 10:02:44.694 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:02:45 compute-0 nova_compute[185194]: 2026-01-31 10:02:45.227 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:45 compute-0 podman[240000]: 2026-01-31 10:02:45.970739482 +0000 UTC m=+0.090479900 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 10:02:46 compute-0 podman[239999]: 2026-01-31 10:02:46.005986866 +0000 UTC m=+0.124057693 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:02:46 compute-0 nova_compute[185194]: 2026-01-31 10:02:46.718 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:50 compute-0 nova_compute[185194]: 2026-01-31 10:02:50.232 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:50 compute-0 podman[240041]: 2026-01-31 10:02:50.967743551 +0000 UTC m=+0.090626734 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:02:51 compute-0 nova_compute[185194]: 2026-01-31 10:02:51.721 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:54 compute-0 podman[240065]: 2026-01-31 10:02:54.976477482 +0000 UTC m=+0.092718794 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, managed_by=edpm_ansible, name=ubi9)
Jan 31 10:02:54 compute-0 podman[240066]: 2026-01-31 10:02:54.97763924 +0000 UTC m=+0.084768061 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 10:02:55 compute-0 nova_compute[185194]: 2026-01-31 10:02:55.235 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:56 compute-0 nova_compute[185194]: 2026-01-31 10:02:56.724 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:02:59 compute-0 podman[201068]: time="2026-01-31T10:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:02:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:02:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 31 10:03:00 compute-0 nova_compute[185194]: 2026-01-31 10:03:00.238 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:01 compute-0 openstack_network_exporter[204162]: ERROR   10:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:03:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:03:01 compute-0 openstack_network_exporter[204162]: ERROR   10:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:03:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:03:01 compute-0 nova_compute[185194]: 2026-01-31 10:03:01.726 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:02 compute-0 podman[240104]: 2026-01-31 10:03:02.959194768 +0000 UTC m=+0.080836225 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:03:05 compute-0 nova_compute[185194]: 2026-01-31 10:03:05.243 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:06 compute-0 nova_compute[185194]: 2026-01-31 10:03:06.728 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:08 compute-0 podman[240128]: 2026-01-31 10:03:08.972474022 +0000 UTC m=+0.084705869 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1769056855, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=openstack_network_exporter, version=9.7, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container)
Jan 31 10:03:10 compute-0 nova_compute[185194]: 2026-01-31 10:03:10.247 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:10 compute-0 podman[240149]: 2026-01-31 10:03:10.935687024 +0000 UTC m=+0.062410427 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 31 10:03:11 compute-0 nova_compute[185194]: 2026-01-31 10:03:11.731 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:15 compute-0 nova_compute[185194]: 2026-01-31 10:03:15.250 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:03:16.428 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:03:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:03:16.428 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:03:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:03:16.429 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:03:16 compute-0 nova_compute[185194]: 2026-01-31 10:03:16.733 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:16 compute-0 podman[240169]: 2026-01-31 10:03:16.981949628 +0000 UTC m=+0.089897574 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 31 10:03:17 compute-0 podman[240168]: 2026-01-31 10:03:17.035289134 +0000 UTC m=+0.146226322 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 10:03:20 compute-0 nova_compute[185194]: 2026-01-31 10:03:20.256 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:21 compute-0 nova_compute[185194]: 2026-01-31 10:03:21.734 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:21 compute-0 podman[240211]: 2026-01-31 10:03:21.940739544 +0000 UTC m=+0.063822742 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:03:25 compute-0 nova_compute[185194]: 2026-01-31 10:03:25.259 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:25 compute-0 podman[240233]: 2026-01-31 10:03:25.942459889 +0000 UTC m=+0.063656237 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 31 10:03:25 compute-0 podman[240232]: 2026-01-31 10:03:25.949747156 +0000 UTC m=+0.073434645 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, name=ubi9, version=9.4, config_id=kepler, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 31 10:03:26 compute-0 nova_compute[185194]: 2026-01-31 10:03:26.736 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:29 compute-0 podman[201068]: time="2026-01-31T10:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:03:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:03:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 31 10:03:30 compute-0 nova_compute[185194]: 2026-01-31 10:03:30.262 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:31 compute-0 openstack_network_exporter[204162]: ERROR   10:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:03:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:03:31 compute-0 openstack_network_exporter[204162]: ERROR   10:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:03:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:03:31 compute-0 nova_compute[185194]: 2026-01-31 10:03:31.738 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:33 compute-0 podman[240269]: 2026-01-31 10:03:33.918945925 +0000 UTC m=+0.048843737 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:03:35 compute-0 nova_compute[185194]: 2026-01-31 10:03:35.267 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:36 compute-0 nova_compute[185194]: 2026-01-31 10:03:36.741 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:39 compute-0 nova_compute[185194]: 2026-01-31 10:03:39.608 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:39 compute-0 podman[240293]: 2026-01-31 10:03:39.937586778 +0000 UTC m=+0.064126679 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, version=9.7, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal)
Jan 31 10:03:40 compute-0 nova_compute[185194]: 2026-01-31 10:03:40.272 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:40 compute-0 nova_compute[185194]: 2026-01-31 10:03:40.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:40 compute-0 nova_compute[185194]: 2026-01-31 10:03:40.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:41 compute-0 nova_compute[185194]: 2026-01-31 10:03:41.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:41 compute-0 nova_compute[185194]: 2026-01-31 10:03:41.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:03:41 compute-0 nova_compute[185194]: 2026-01-31 10:03:41.742 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:42 compute-0 podman[240313]: 2026-01-31 10:03:42.028998745 +0000 UTC m=+0.147080083 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 10:03:42 compute-0 nova_compute[185194]: 2026-01-31 10:03:42.302 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:03:42 compute-0 nova_compute[185194]: 2026-01-31 10:03:42.302 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:03:42 compute-0 nova_compute[185194]: 2026-01-31 10:03:42.302 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.379 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.396 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.397 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.398 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.399 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.399 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.423 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.424 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.425 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.425 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.503 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.581 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.582 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.641 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.642 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.686 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.687 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.733 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.738 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.823 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.824 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.884 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.886 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.935 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.937 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:03:43 compute-0 nova_compute[185194]: 2026-01-31 10:03:43.995 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.321 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.322 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5066MB free_disk=72.3812026977539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.322 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.322 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.402 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.403 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.403 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.403 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.421 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.438 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.438 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.454 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.472 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.533 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.552 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.555 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:03:44 compute-0 nova_compute[185194]: 2026-01-31 10:03:44.555 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:03:45 compute-0 nova_compute[185194]: 2026-01-31 10:03:45.278 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:45 compute-0 nova_compute[185194]: 2026-01-31 10:03:45.762 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:45 compute-0 nova_compute[185194]: 2026-01-31 10:03:45.788 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:45 compute-0 nova_compute[185194]: 2026-01-31 10:03:45.788 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:03:45 compute-0 nova_compute[185194]: 2026-01-31 10:03:45.789 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:03:46 compute-0 nova_compute[185194]: 2026-01-31 10:03:46.745 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:48 compute-0 podman[240356]: 2026-01-31 10:03:48.005915536 +0000 UTC m=+0.122225141 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:03:48 compute-0 podman[240355]: 2026-01-31 10:03:48.015667473 +0000 UTC m=+0.131973438 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:03:50 compute-0 nova_compute[185194]: 2026-01-31 10:03:50.282 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:51 compute-0 nova_compute[185194]: 2026-01-31 10:03:51.749 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:52 compute-0 podman[240400]: 2026-01-31 10:03:52.956809279 +0000 UTC m=+0.076142051 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:03:55 compute-0 nova_compute[185194]: 2026-01-31 10:03:55.285 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:56 compute-0 nova_compute[185194]: 2026-01-31 10:03:56.751 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:03:56 compute-0 podman[240424]: 2026-01-31 10:03:56.976830769 +0000 UTC m=+0.104078799 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, config_id=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0)
Jan 31 10:03:56 compute-0 podman[240425]: 2026-01-31 10:03:56.97768457 +0000 UTC m=+0.104111800 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:03:59 compute-0 podman[201068]: time="2026-01-31T10:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:03:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:03:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 31 10:04:00 compute-0 nova_compute[185194]: 2026-01-31 10:04:00.289 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:01 compute-0 openstack_network_exporter[204162]: ERROR   10:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:04:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:04:01 compute-0 openstack_network_exporter[204162]: ERROR   10:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:04:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:04:01 compute-0 nova_compute[185194]: 2026-01-31 10:04:01.753 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:04 compute-0 podman[240465]: 2026-01-31 10:04:04.938168264 +0000 UTC m=+0.063613864 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:04:05 compute-0 nova_compute[185194]: 2026-01-31 10:04:05.292 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:06 compute-0 nova_compute[185194]: 2026-01-31 10:04:06.755 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:10 compute-0 nova_compute[185194]: 2026-01-31 10:04:10.295 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:10 compute-0 podman[240489]: 2026-01-31 10:04:10.958081325 +0000 UTC m=+0.074057825 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9/ubi-minimal)
Jan 31 10:04:11 compute-0 nova_compute[185194]: 2026-01-31 10:04:11.757 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:12 compute-0 podman[240510]: 2026-01-31 10:04:12.964603347 +0000 UTC m=+0.082343005 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 10:04:15 compute-0 nova_compute[185194]: 2026-01-31 10:04:15.298 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:04:16.429 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:04:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:04:16.430 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:04:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:04:16.431 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:04:16 compute-0 nova_compute[185194]: 2026-01-31 10:04:16.759 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:18 compute-0 podman[240530]: 2026-01-31 10:04:18.947191928 +0000 UTC m=+0.065010787 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:04:19 compute-0 podman[240529]: 2026-01-31 10:04:19.001521198 +0000 UTC m=+0.122191186 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 10:04:20 compute-0 nova_compute[185194]: 2026-01-31 10:04:20.301 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:21 compute-0 nova_compute[185194]: 2026-01-31 10:04:21.763 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:23 compute-0 podman[240573]: 2026-01-31 10:04:23.951364614 +0000 UTC m=+0.073247146 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:04:25 compute-0 nova_compute[185194]: 2026-01-31 10:04:25.306 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:26 compute-0 nova_compute[185194]: 2026-01-31 10:04:26.764 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:27 compute-0 podman[240597]: 2026-01-31 10:04:27.954535058 +0000 UTC m=+0.078406010 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_id=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, vendor=Red Hat, Inc.)
Jan 31 10:04:27 compute-0 podman[240598]: 2026-01-31 10:04:27.963893783 +0000 UTC m=+0.089589989 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:04:29 compute-0 podman[201068]: time="2026-01-31T10:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:04:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:04:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 31 10:04:30 compute-0 nova_compute[185194]: 2026-01-31 10:04:30.310 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.698 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.698 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.698 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.705 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.709 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:04:30.710271) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:04:30.712793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.716 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.719 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:04:30.720331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:04:30.721747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.739 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.740 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.740 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.762 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.762 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.763 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.763 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
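[Editor's note: every meter in this capture follows the same lifecycle visible above: an INFO "Polling pollster <name>" line, a coordination check, a heartbeat update, per-instance sample DEBUG lines, then INFO "Finished polling pollster <name>". The sketch below is an illustrative way to time each cycle from those two INFO lines; it is not part of ceilometer, and the names EVENT_RE and cycle_durations are invented for this example.]

```python
import re
from datetime import datetime

# Matches the INFO lines bracketing each pollster run, e.g.
#   "2026-01-31 10:04:30.721 14 INFO ... Polling pollster disk.device.capacity ..."
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO .* "
    r"(?P<kind>Polling|Finished polling) pollster (?P<name>[\w.]+)"
)

def cycle_durations(lines):
    """Return seconds elapsed between 'Polling' and 'Finished polling' per meter."""
    started, durations = {}, {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
        if m.group("kind") == "Polling":
            started[m.group("name")] = ts
        elif m.group("name") in started:
            delta = ts - started.pop(m.group("name"))
            durations[m.group("name")] = delta.total_seconds()
    return durations
```

For the disk.device.capacity cycle above (start 10:04:30.721, finish 10:04:30.763) this yields roughly 0.042 s, dominated by the three per-device libvirt stat reads logged in between.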
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.764 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.764 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:04:30.764244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.817 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.817 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.817 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.875 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.875 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.876 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.878 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.879 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:04:30.879004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.902 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.929 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.1875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.931 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.931 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.931 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.932 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.932 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.932 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:04:30.930808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.934 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.934 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:04:30.934007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.936 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:04:30.936156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.936 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.937 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.937 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.937 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.938 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.939 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.940 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.940 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:04:30.939902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.941 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.942 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.942 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.943 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:04:30.942175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.943 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.944 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.944 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.945 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.945 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.946 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.947 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.949 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:04:30.946733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.949 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.951 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.952 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.953 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.953 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:04:30.949427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:04:30.953326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.955 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.955 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.957 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.958 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.958 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 36770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.959 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 196150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:04:30.955392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.961 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.961 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 4934 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.962 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.963 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.963 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:04:30.958430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:04:30.961440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:04:30.963177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.964 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.965 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.965 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.965 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.965 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.966 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.966 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.967 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.968 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.968 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.968 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.969 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4386558050 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.969 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.969 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:04:30.964884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:04:30.967923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.970 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.971 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.972 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.972 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.973 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.973 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.973 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.974 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.974 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:04:30.970780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:04:30.972773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.975 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.976 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.976 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.976 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.977 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.977 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.978 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.979 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.980 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.981 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.982 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.983 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:04:30.975744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:04:30.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:04:30.978781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:04:31 compute-0 openstack_network_exporter[204162]: ERROR   10:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:04:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:04:31 compute-0 openstack_network_exporter[204162]: ERROR   10:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:04:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:04:31 compute-0 nova_compute[185194]: 2026-01-31 10:04:31.767 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:35 compute-0 nova_compute[185194]: 2026-01-31 10:04:35.313 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:35 compute-0 podman[240636]: 2026-01-31 10:04:35.958111829 +0000 UTC m=+0.083727429 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:04:36 compute-0 nova_compute[185194]: 2026-01-31 10:04:36.769 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:38 compute-0 nova_compute[185194]: 2026-01-31 10:04:38.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:38 compute-0 nova_compute[185194]: 2026-01-31 10:04:38.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:04:40 compute-0 nova_compute[185194]: 2026-01-31 10:04:40.317 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:40 compute-0 nova_compute[185194]: 2026-01-31 10:04:40.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:41 compute-0 nova_compute[185194]: 2026-01-31 10:04:41.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:41 compute-0 nova_compute[185194]: 2026-01-31 10:04:41.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:04:41 compute-0 nova_compute[185194]: 2026-01-31 10:04:41.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:04:41 compute-0 nova_compute[185194]: 2026-01-31 10:04:41.772 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:41 compute-0 podman[240659]: 2026-01-31 10:04:41.938593941 +0000 UTC m=+0.068109132 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:04:42 compute-0 nova_compute[185194]: 2026-01-31 10:04:42.290 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:04:42 compute-0 nova_compute[185194]: 2026-01-31 10:04:42.291 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:04:42 compute-0 nova_compute[185194]: 2026-01-31 10:04:42.291 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:04:42 compute-0 nova_compute[185194]: 2026-01-31 10:04:42.291 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.403 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.446 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.447 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.448 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.448 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.449 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.449 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.450 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:04:43 compute-0 nova_compute[185194]: 2026-01-31 10:04:43.463 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 10:04:43 compute-0 podman[240682]: 2026-01-31 10:04:43.997166206 +0000 UTC m=+0.117156604 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.457 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.638 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.640 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.765 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.846 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.848 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.913 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.916 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.987 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:44 compute-0 nova_compute[185194]: 2026-01-31 10:04:44.989 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.042 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.047 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.096 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.097 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.164 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.166 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.217 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.218 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.265 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.320 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.574 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.576 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5066MB free_disk=72.38147354125977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.577 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.578 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.743 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.744 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.745 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.746 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.909 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.925 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.927 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:04:45 compute-0 nova_compute[185194]: 2026-01-31 10:04:45.928 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:04:46 compute-0 nova_compute[185194]: 2026-01-31 10:04:46.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:46 compute-0 nova_compute[185194]: 2026-01-31 10:04:46.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:46 compute-0 nova_compute[185194]: 2026-01-31 10:04:46.774 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:47 compute-0 nova_compute[185194]: 2026-01-31 10:04:47.620 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:04:47 compute-0 nova_compute[185194]: 2026-01-31 10:04:47.621 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:04:49 compute-0 podman[240726]: 2026-01-31 10:04:49.948893474 +0000 UTC m=+0.069326711 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 10:04:49 compute-0 podman[240725]: 2026-01-31 10:04:49.982392731 +0000 UTC m=+0.110963574 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 10:04:50 compute-0 nova_compute[185194]: 2026-01-31 10:04:50.323 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:51 compute-0 nova_compute[185194]: 2026-01-31 10:04:51.776 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:54 compute-0 podman[240767]: 2026-01-31 10:04:54.979466675 +0000 UTC m=+0.096371863 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:04:55 compute-0 nova_compute[185194]: 2026-01-31 10:04:55.328 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:56 compute-0 nova_compute[185194]: 2026-01-31 10:04:56.780 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:04:58 compute-0 podman[240792]: 2026-01-31 10:04:58.948069686 +0000 UTC m=+0.071403291 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 10:04:58 compute-0 podman[240791]: 2026-01-31 10:04:58.971153153 +0000 UTC m=+0.098361712 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Jan 31 10:04:59 compute-0 podman[201068]: time="2026-01-31T10:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:04:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:04:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 31 10:05:00 compute-0 nova_compute[185194]: 2026-01-31 10:05:00.332 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:01 compute-0 openstack_network_exporter[204162]: ERROR   10:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:05:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:05:01 compute-0 openstack_network_exporter[204162]: ERROR   10:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:05:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:05:01 compute-0 nova_compute[185194]: 2026-01-31 10:05:01.781 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:05 compute-0 nova_compute[185194]: 2026-01-31 10:05:05.335 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:06 compute-0 nova_compute[185194]: 2026-01-31 10:05:06.783 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:06 compute-0 podman[240830]: 2026-01-31 10:05:06.966444793 +0000 UTC m=+0.085453159 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:05:10 compute-0 nova_compute[185194]: 2026-01-31 10:05:10.338 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:11 compute-0 nova_compute[185194]: 2026-01-31 10:05:11.787 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:12 compute-0 podman[240855]: 2026-01-31 10:05:12.985484147 +0000 UTC m=+0.105870223 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, version=9.7, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, 
summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, architecture=x86_64, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1769056855, com.redhat.component=ubi9-minimal-container)
Jan 31 10:05:14 compute-0 podman[240877]: 2026-01-31 10:05:14.751199359 +0000 UTC m=+0.077193649 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 10:05:15 compute-0 nova_compute[185194]: 2026-01-31 10:05:15.340 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:05:16.430 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:05:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:05:16.432 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:05:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:05:16.433 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:05:16 compute-0 nova_compute[185194]: 2026-01-31 10:05:16.789 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:20 compute-0 nova_compute[185194]: 2026-01-31 10:05:20.342 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:20 compute-0 podman[240899]: 2026-01-31 10:05:20.983498394 +0000 UTC m=+0.097137501 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 10:05:21 compute-0 podman[240898]: 2026-01-31 10:05:21.036545858 +0000 UTC m=+0.153976507 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 10:05:21 compute-0 nova_compute[185194]: 2026-01-31 10:05:21.792 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:25 compute-0 nova_compute[185194]: 2026-01-31 10:05:25.346 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:25 compute-0 podman[240941]: 2026-01-31 10:05:25.453135533 +0000 UTC m=+0.077861906 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:05:26 compute-0 nova_compute[185194]: 2026-01-31 10:05:26.794 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:29 compute-0 podman[201068]: time="2026-01-31T10:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:05:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:05:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 31 10:05:29 compute-0 podman[240964]: 2026-01-31 10:05:29.965854542 +0000 UTC m=+0.095489522 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, version=9.4, io.openshift.tags=base rhel9)
Jan 31 10:05:29 compute-0 podman[240965]: 2026-01-31 10:05:29.997659032 +0000 UTC m=+0.115820764 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 10:05:30 compute-0 nova_compute[185194]: 2026-01-31 10:05:30.351 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:31 compute-0 openstack_network_exporter[204162]: ERROR   10:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:05:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:05:31 compute-0 openstack_network_exporter[204162]: ERROR   10:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:05:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:05:31 compute-0 nova_compute[185194]: 2026-01-31 10:05:31.796 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:35 compute-0 nova_compute[185194]: 2026-01-31 10:05:35.354 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:36 compute-0 nova_compute[185194]: 2026-01-31 10:05:36.798 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:37 compute-0 podman[241001]: 2026-01-31 10:05:37.954863182 +0000 UTC m=+0.082830886 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:05:40 compute-0 nova_compute[185194]: 2026-01-31 10:05:40.358 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:41 compute-0 nova_compute[185194]: 2026-01-31 10:05:41.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:41 compute-0 nova_compute[185194]: 2026-01-31 10:05:41.608 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:05:41 compute-0 nova_compute[185194]: 2026-01-31 10:05:41.801 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:42 compute-0 nova_compute[185194]: 2026-01-31 10:05:42.311 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:05:42 compute-0 nova_compute[185194]: 2026-01-31 10:05:42.313 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:05:42 compute-0 nova_compute[185194]: 2026-01-31 10:05:42.313 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.529 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.547 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.548 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.549 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.550 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.550 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:43 compute-0 nova_compute[185194]: 2026-01-31 10:05:43.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:43 compute-0 podman[241024]: 2026-01-31 10:05:43.965297917 +0000 UTC m=+0.091624168 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, build-date=2026-01-22T05:09:47Z, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 10:05:44 compute-0 nova_compute[185194]: 2026-01-31 10:05:44.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:44 compute-0 podman[241045]: 2026-01-31 10:05:44.9777775 +0000 UTC m=+0.103662019 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:05:45 compute-0 nova_compute[185194]: 2026-01-31 10:05:45.522 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:45 compute-0 nova_compute[185194]: 2026-01-31 10:05:45.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:45 compute-0 nova_compute[185194]: 2026-01-31 10:05:45.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.631 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.632 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.632 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.632 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.723 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.803 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.808 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.809 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.872 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.874 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.931 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.932 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.984 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:46 compute-0 nova_compute[185194]: 2026-01-31 10:05:46.995 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.082 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.083 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.141 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.142 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.190 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.192 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.280 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.643 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.645 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5063MB free_disk=72.38153076171875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.645 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.646 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.763 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.764 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.765 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.766 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.856 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.878 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.882 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:05:47 compute-0 nova_compute[185194]: 2026-01-31 10:05:47.883 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:05:49 compute-0 nova_compute[185194]: 2026-01-31 10:05:49.885 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:05:49 compute-0 nova_compute[185194]: 2026-01-31 10:05:49.887 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:05:50 compute-0 nova_compute[185194]: 2026-01-31 10:05:50.526 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:51 compute-0 nova_compute[185194]: 2026-01-31 10:05:51.806 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:51 compute-0 podman[241091]: 2026-01-31 10:05:51.981646418 +0000 UTC m=+0.098082905 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 10:05:52 compute-0 podman[241090]: 2026-01-31 10:05:52.018683115 +0000 UTC m=+0.141276721 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:05:55 compute-0 nova_compute[185194]: 2026-01-31 10:05:55.551 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:55 compute-0 podman[241134]: 2026-01-31 10:05:55.960400717 +0000 UTC m=+0.079820423 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:05:56 compute-0 nova_compute[185194]: 2026-01-31 10:05:56.809 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:05:59 compute-0 podman[201068]: time="2026-01-31T10:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:05:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:05:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 31 10:06:00 compute-0 nova_compute[185194]: 2026-01-31 10:06:00.557 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:00 compute-0 podman[241158]: 2026-01-31 10:06:00.950802958 +0000 UTC m=+0.074076054 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_id=kepler, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9)
Jan 31 10:06:00 compute-0 podman[241159]: 2026-01-31 10:06:00.994476565 +0000 UTC m=+0.112983866 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:06:01 compute-0 openstack_network_exporter[204162]: ERROR   10:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:06:01 compute-0 openstack_network_exporter[204162]: ERROR   10:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:06:01 compute-0 nova_compute[185194]: 2026-01-31 10:06:01.812 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:05 compute-0 nova_compute[185194]: 2026-01-31 10:06:05.560 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:06 compute-0 nova_compute[185194]: 2026-01-31 10:06:06.814 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:08 compute-0 nova_compute[185194]: 2026-01-31 10:06:08.476 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:08 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:08.477 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:06:08 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:08.478 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:06:09 compute-0 podman[241192]: 2026-01-31 10:06:09.006043354 +0000 UTC m=+0.126569112 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:06:10 compute-0 nova_compute[185194]: 2026-01-31 10:06:10.564 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:11 compute-0 nova_compute[185194]: 2026-01-31 10:06:11.815 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:14 compute-0 podman[241213]: 2026-01-31 10:06:14.735339299 +0000 UTC m=+0.067016004 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, release=1769056855, vcs-type=git, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., io.openshift.expose-services=)
Jan 31 10:06:15 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:15.481 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:15 compute-0 nova_compute[185194]: 2026-01-31 10:06:15.568 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:16 compute-0 podman[241234]: 2026-01-31 10:06:16.013099893 +0000 UTC m=+0.129607801 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:06:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:16.432 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:16.433 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:16.434 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:16 compute-0 nova_compute[185194]: 2026-01-31 10:06:16.818 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.501 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.501 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.519 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.598 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.599 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.614 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.615 185198 INFO nova.compute.claims [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.772 185198 DEBUG nova.compute.provider_tree [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.788 185198 DEBUG nova.scheduler.client.report [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.837 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.838 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.900 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.901 185198 DEBUG nova.network.neutron [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.926 185198 INFO nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:06:17 compute-0 nova_compute[185194]: 2026-01-31 10:06:17.976 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.113 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.116 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.117 185198 INFO nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Creating image(s)
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.118 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.119 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.120 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.147 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.195 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.197 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.198 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.227 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.276 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.277 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.318 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.319 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.320 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.391 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.393 185198 DEBUG nova.virt.disk.api [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking if we can resize image /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.394 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.469 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.471 185198 DEBUG nova.virt.disk.api [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Cannot resize image /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.472 185198 DEBUG nova.objects.instance [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'migration_context' on Instance uuid 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.489 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.489 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.490 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.512 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.594 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.596 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.597 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.625 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.691 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.693 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.741 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.743 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.744 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.816 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.818 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.818 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Ensure instance console log exists: /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.819 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.820 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:18 compute-0 nova_compute[185194]: 2026-01-31 10:06:18.821 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.116 185198 DEBUG nova.network.neutron [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Successfully updated port: fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.145 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.146 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.146 185198 DEBUG nova.network.neutron [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.256 185198 DEBUG nova.compute.manager [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-changed-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.256 185198 DEBUG nova.compute.manager [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Refreshing instance network info cache due to event network-changed-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.257 185198 DEBUG oslo_concurrency.lockutils [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:06:19 compute-0 nova_compute[185194]: 2026-01-31 10:06:19.304 185198 DEBUG nova.network.neutron [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.065 185198 DEBUG nova.network.neutron [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.084 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.084 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Instance network_info: |[{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.085 185198 DEBUG oslo_concurrency.lockutils [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.085 185198 DEBUG nova.network.neutron [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Refreshing network info cache for port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.090 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Start _get_guest_xml network_info=[{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.097 185198 WARNING nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.106 185198 DEBUG nova.virt.libvirt.host [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.108 185198 DEBUG nova.virt.libvirt.host [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.114 185198 DEBUG nova.virt.libvirt.host [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.114 185198 DEBUG nova.virt.libvirt.host [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.115 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.115 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T09:57:04Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='5ace5526-788a-41cf-9e40-e75da8858688',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.116 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.116 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.117 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.117 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.117 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.118 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.118 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.119 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.119 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.120 185198 DEBUG nova.virt.hardware [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.124 185198 DEBUG nova.virt.libvirt.vif [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',id=3,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-1vrnt46k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:06:18Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA1NzUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 31 10:06:20 compute-0 nova_compute[185194]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA1NzUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=37c4cead-85b0-43c5-9ae1-9b6b45d7a497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.125 185198 DEBUG nova.network.os_vif_util [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.126 185198 DEBUG nova.network.os_vif_util [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.127 185198 DEBUG nova.objects.instance [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.146 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <uuid>37c4cead-85b0-43c5-9ae1-9b6b45d7a497</uuid>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <name>instance-00000003</name>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <memory>524288</memory>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:name>vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3</nova:name>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:06:20</nova:creationTime>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:flavor name="m1.small">
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:memory>512</nova:memory>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:user uuid="d3342a7282114996b6010246d4ade24e">admin</nova:user>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:project uuid="155389cbed6644acacdbeeb6155adb54">admin</nova:project>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="8b57d666-88c0-4e62-a76a-0d45801ca1a6"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         <nova:port uuid="fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f">
Jan 31 10:06:20 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="192.168.0.107" ipVersion="4"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <system>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="serial">37c4cead-85b0-43c5-9ae1-9b6b45d7a497</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="uuid">37c4cead-85b0-43c5-9ae1-9b6b45d7a497</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </system>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <os>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </os>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <features>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </features>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <target dev="vdb" bus="virtio"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.config"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:f8:a6:6c"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <target dev="tapfd2acd14-79"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/console.log" append="off"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <video>
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </video>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:06:20 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:06:20 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:06:20 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:06:20 compute-0 nova_compute[185194]: </domain>
Jan 31 10:06:20 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.147 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Preparing to wait for external event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.148 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.148 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.148 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.149 185198 DEBUG nova.virt.libvirt.vif [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',id=3,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-1vrnt46k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:06:18Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA1NzUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 31 10:06:20 compute-0 nova_compute[185194]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA1NzUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=37c4cead-85b0-43c5-9ae1-9b6b45d7a497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.150 185198 DEBUG nova.network.os_vif_util [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.151 185198 DEBUG nova.network.os_vif_util [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.151 185198 DEBUG os_vif [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.152 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.152 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.153 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.157 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.158 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd2acd14-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.158 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd2acd14-79, col_values=(('external_ids', {'iface-id': 'fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:a6:6c', 'vm-uuid': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.161 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.163 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:06:20 compute-0 NetworkManager[56281]: <info>  [1769853980.1636] manager: (tapfd2acd14-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.172 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.174 185198 INFO os_vif [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79')
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.257 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.258 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.259 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.260 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No VIF found with MAC fa:16:3e:f8:a6:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.261 185198 INFO nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Using config drive
Jan 31 10:06:20 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:06:20.124 185198 DEBUG nova.virt.libvirt.vif [None req-fbddd497-ef [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:06:20 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:06:20.149 185198 DEBUG nova.virt.libvirt.vif [None req-fbddd497-ef [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.574 185198 INFO nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Creating config drive at /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.config
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.580 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbv_2frxr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.706 185198 DEBUG oslo_concurrency.processutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbv_2frxr" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:20 compute-0 kernel: tapfd2acd14-79: entered promiscuous mode
Jan 31 10:06:20 compute-0 ovn_controller[97627]: 2026-01-31T10:06:20Z|00040|binding|INFO|Claiming lport fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f for this chassis.
Jan 31 10:06:20 compute-0 ovn_controller[97627]: 2026-01-31T10:06:20Z|00041|binding|INFO|fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f: Claiming fa:16:3e:f8:a6:6c 192.168.0.107
Jan 31 10:06:20 compute-0 NetworkManager[56281]: <info>  [1769853980.7914] manager: (tapfd2acd14-79): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.793 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 ovn_controller[97627]: 2026-01-31T10:06:20Z|00042|binding|INFO|Setting lport fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f ovn-installed in OVS
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.809 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.814 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 systemd-udevd[241300]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:06:20 compute-0 NetworkManager[56281]: <info>  [1769853980.8321] device (tapfd2acd14-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:06:20 compute-0 NetworkManager[56281]: <info>  [1769853980.8422] device (tapfd2acd14-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:06:20 compute-0 ovn_controller[97627]: 2026-01-31T10:06:20Z|00043|binding|INFO|Setting lport fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f up in Southbound
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.864 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a6:6c 192.168.0.107'], port_security=['fa:16:3e:f8:a6:6c 192.168.0.107'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-port-d3sykcgvkxbi', 'neutron:cidrs': '192.168.0.107/24', 'neutron:device_id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-port-d3sykcgvkxbi', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.867 106883 INFO neutron.agent.ovn.metadata.agent [-] Port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b bound to our chassis
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.870 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.881 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[57e22ff2-ad4e-44bb-bcde-42474053d20d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 systemd-machined[156556]: New machine qemu-3-instance-00000003.
Jan 31 10:06:20 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.915 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[77bb5190-9ca6-46dd-927f-f9b2a884c367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.925 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[f29d1b10-6814-4b0d-885b-a03312a2011d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.948 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[50761aa0-e65f-47a9-9139-2f196ebb7a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.964 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f916a8bf-70ab-4c81-b242-a3462659e5e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 16797, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241314, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.983 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[0398b43f-f14f-42aa-875e-e187261ffb30]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241317, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241317, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.985 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.988 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 nova_compute[185194]: 2026-01-31 10:06:20.990 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.992 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.993 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.994 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:06:20 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:06:20.995 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.541 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853981.540815, 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.542 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] VM Started (Lifecycle Event)
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.567 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.573 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853981.5409403, 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.574 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] VM Paused (Lifecycle Event)
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.596 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.603 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.627 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:06:21 compute-0 nova_compute[185194]: 2026-01-31 10:06:21.822 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.431 185198 DEBUG nova.compute.manager [req-c1808455-ea94-49b4-bb0f-4a7f42b03bea req-3696dd84-1b1f-4e76-976b-3bd5594523a8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.431 185198 DEBUG oslo_concurrency.lockutils [req-c1808455-ea94-49b4-bb0f-4a7f42b03bea req-3696dd84-1b1f-4e76-976b-3bd5594523a8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.432 185198 DEBUG oslo_concurrency.lockutils [req-c1808455-ea94-49b4-bb0f-4a7f42b03bea req-3696dd84-1b1f-4e76-976b-3bd5594523a8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.432 185198 DEBUG oslo_concurrency.lockutils [req-c1808455-ea94-49b4-bb0f-4a7f42b03bea req-3696dd84-1b1f-4e76-976b-3bd5594523a8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.432 185198 DEBUG nova.compute.manager [req-c1808455-ea94-49b4-bb0f-4a7f42b03bea req-3696dd84-1b1f-4e76-976b-3bd5594523a8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Processing event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.433 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.437 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769853982.4374235, 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.438 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] VM Resumed (Lifecycle Event)
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.439 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.443 185198 INFO nova.virt.libvirt.driver [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Instance spawned successfully.
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.444 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.464 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.469 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.470 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.470 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.471 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.471 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.472 185198 DEBUG nova.virt.libvirt.driver [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.477 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.504 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.530 185198 INFO nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Took 4.42 seconds to spawn the instance on the hypervisor.
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.530 185198 DEBUG nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.598 185198 INFO nova.compute.manager [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Took 5.03 seconds to build instance.
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.614 185198 DEBUG oslo_concurrency.lockutils [None req-fbddd497-efda-46d5-a1fd-d9f760a5d429 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.680 185198 DEBUG nova.network.neutron [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated VIF entry in instance network info cache for port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.681 185198 DEBUG nova.network.neutron [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:06:22 compute-0 nova_compute[185194]: 2026-01-31 10:06:22.704 185198 DEBUG oslo_concurrency.lockutils [req-711d9988-fb64-4d0e-a47c-6f7c5abcf43f req-d0bc6c9c-a189-474d-8bab-4b15a6cdfdf0 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:06:22 compute-0 podman[241327]: 2026-01-31 10:06:22.969829313 +0000 UTC m=+0.085145235 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 31 10:06:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 10:06:23 compute-0 podman[241326]: 2026-01-31 10:06:22.999794413 +0000 UTC m=+0.117457946 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 10:06:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.961 185198 DEBUG nova.compute.manager [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.962 185198 DEBUG oslo_concurrency.lockutils [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.962 185198 DEBUG oslo_concurrency.lockutils [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.962 185198 DEBUG oslo_concurrency.lockutils [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.963 185198 DEBUG nova.compute.manager [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] No waiting events found dispatching network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:06:24 compute-0 nova_compute[185194]: 2026-01-31 10:06:24.963 185198 WARNING nova.compute.manager [req-5c4a0769-acd1-4eef-82d7-0b69706aafa1 req-fefc622b-4281-4b5a-b2e3-566eeebdeabb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received unexpected event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f for instance with vm_state active and task_state None.
Jan 31 10:06:25 compute-0 nova_compute[185194]: 2026-01-31 10:06:25.161 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:26 compute-0 nova_compute[185194]: 2026-01-31 10:06:26.825 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:26 compute-0 podman[241393]: 2026-01-31 10:06:26.961838463 +0000 UTC m=+0.086224273 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:06:29 compute-0 podman[201068]: time="2026-01-31T10:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:06:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:06:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 31 10:06:30 compute-0 nova_compute[185194]: 2026-01-31 10:06:30.164 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.699 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.699 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.699 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.708 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.714 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.716 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:06:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:30.719 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/37c4cead-85b0-43c5-9ae1-9b6b45d7a497 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.189 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 31 Jan 2026 10:06:30 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f8eca0ba-7c6c-43c8-aad8-4af447feb002 x-openstack-request-id: req-f8eca0ba-7c6c-43c8-aad8-4af447feb002 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.190 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "37c4cead-85b0-43c5-9ae1-9b6b45d7a497", "name": "vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3", "status": "ACTIVE", "tenant_id": "155389cbed6644acacdbeeb6155adb54", "user_id": "d3342a7282114996b6010246d4ade24e", "metadata": {"metering.server_group": "cd99fa32-2992-4cd0-a9a0-648127ea67dc"}, "hostId": "67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d", "image": {"id": "8b57d666-88c0-4e62-a76a-0d45801ca1a6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8b57d666-88c0-4e62-a76a-0d45801ca1a6"}]}, "flavor": {"id": "5ace5526-788a-41cf-9e40-e75da8858688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/5ace5526-788a-41cf-9e40-e75da8858688"}]}, "created": "2026-01-31T10:06:15Z", "updated": "2026-01-31T10:06:22Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.107", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:a6:6c"}, {"version": 4, "addr": "192.168.122.178", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:a6:6c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/37c4cead-85b0-43c5-9ae1-9b6b45d7a497"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/37c4cead-85b0-43c5-9ae1-9b6b45d7a497"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-31T10:06:22.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.190 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/37c4cead-85b0-43c5-9ae1-9b6b45d7a497 used request id req-f8eca0ba-7c6c-43c8-aad8-4af447feb002 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.191 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.194 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:06:31.192840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:06:31.194599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.199 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.203 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.206 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 / tapfd2acd14-79 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.207 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:06:31.208314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:06:31.209517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.231 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.231 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.231 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.253 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.254 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.254 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.278 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.279 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.279 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.280 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.280 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:06:31.280539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.337 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.338 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.338 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.411 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.412 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.412 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 openstack_network_exporter[204162]: ERROR   10:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:06:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:06:31 compute-0 openstack_network_exporter[204162]: ERROR   10:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:06:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.500 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.501 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.501 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.503 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:06:31.503789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.530 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.548 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.21875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.566 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.567 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497: ceilometer.compute.pollsters.NoVolumeException
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.568 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.568 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:06:31.568177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.569 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.569 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.569 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.570 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.570 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 685616016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.570 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.570 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 4120788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.572 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.573 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:06:31.572194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:06:31.574138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.574 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.575 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.575 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.575 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.576 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.576 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.576 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.577 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.578 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:06:31.578351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.580 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.580 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.580 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.581 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.581 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.581 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.582 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.582 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.582 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:06:31.580291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.584 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3>]
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-31T10:06:31.584568) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.585 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.586 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.586 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:06:31.585691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:06:31.587261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.589 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:06:31.588988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.589 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.589 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.590 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.591 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.591 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.591 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:06:31.590600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 38450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.592 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 316520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.593 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 8860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:06:31.592610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.594 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:06:31.594364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:06:31.596016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.596 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.597 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:06:31.597658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.598 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.599 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.599 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.599 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.600 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3>]
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-31T10:06:31.600792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:06:31.601551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4386558050 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.602 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.603 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.603 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.603 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.604 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.605 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:06:31.604469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.606 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.607 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.607 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.607 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.607 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.608 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.609 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.609 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.609 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.609 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.610 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.610 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.610 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.610 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.611 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.612 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:06:31.605923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.612 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.612 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.612 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:06:31.608863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:06:31.611938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.613 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.614 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.615 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.616 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:06:31.617 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:06:31 compute-0 nova_compute[185194]: 2026-01-31 10:06:31.828 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:31 compute-0 podman[241417]: 2026-01-31 10:06:31.985647734 +0000 UTC m=+0.106159862 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=kepler, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, vcs-type=git, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4)
Jan 31 10:06:31 compute-0 podman[241418]: 2026-01-31 10:06:31.990957592 +0000 UTC m=+0.107284881 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:06:35 compute-0 nova_compute[185194]: 2026-01-31 10:06:35.168 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:36 compute-0 nova_compute[185194]: 2026-01-31 10:06:36.830 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:39 compute-0 podman[241453]: 2026-01-31 10:06:39.952785646 +0000 UTC m=+0.080094953 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:06:40 compute-0 nova_compute[185194]: 2026-01-31 10:06:40.171 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:41 compute-0 nova_compute[185194]: 2026-01-31 10:06:41.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:41 compute-0 nova_compute[185194]: 2026-01-31 10:06:41.833 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.968 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.969 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.970 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:06:42 compute-0 nova_compute[185194]: 2026-01-31 10:06:42.970 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:06:44 compute-0 podman[241477]: 2026-01-31 10:06:44.945754686 +0000 UTC m=+0.073138763 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, version=9.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible)
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.174 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.804 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.834 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.835 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.836 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.836 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:45 compute-0 nova_compute[185194]: 2026-01-31 10:06:45.837 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:46 compute-0 nova_compute[185194]: 2026-01-31 10:06:46.835 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:46 compute-0 podman[241498]: 2026-01-31 10:06:46.981789137 +0000 UTC m=+0.090166405 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.635 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.637 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.729 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.778 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.780 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.879 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.880 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.941 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:47 compute-0 nova_compute[185194]: 2026-01-31 10:06:47.943 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.012 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.022 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.069 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.071 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.119 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.121 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.168 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.170 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.220 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.227 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.273 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.275 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.325 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.326 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.386 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.387 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.449 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.783 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.785 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4873MB free_disk=72.38047790527344GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.785 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.786 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.858 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.859 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.860 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.860 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.861 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:06:48 compute-0 nova_compute[185194]: 2026-01-31 10:06:48.977 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:06:49 compute-0 nova_compute[185194]: 2026-01-31 10:06:49.002 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:06:49 compute-0 nova_compute[185194]: 2026-01-31 10:06:49.052 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:06:49 compute-0 nova_compute[185194]: 2026-01-31 10:06:49.052 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:06:50 compute-0 nova_compute[185194]: 2026-01-31 10:06:50.178 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:50 compute-0 ovn_controller[97627]: 2026-01-31T10:06:50Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 10:06:51 compute-0 nova_compute[185194]: 2026-01-31 10:06:51.051 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:06:51 compute-0 nova_compute[185194]: 2026-01-31 10:06:51.052 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:06:51 compute-0 nova_compute[185194]: 2026-01-31 10:06:51.838 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:53 compute-0 podman[241557]: 2026-01-31 10:06:53.981528856 +0000 UTC m=+0.109753715 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 10:06:53 compute-0 podman[241558]: 2026-01-31 10:06:53.985564431 +0000 UTC m=+0.109258282 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6)
Jan 31 10:06:55 compute-0 ovn_controller[97627]: 2026-01-31T10:06:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:a6:6c 192.168.0.107
Jan 31 10:06:55 compute-0 ovn_controller[97627]: 2026-01-31T10:06:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:a6:6c 192.168.0.107
Jan 31 10:06:55 compute-0 nova_compute[185194]: 2026-01-31 10:06:55.182 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:56 compute-0 nova_compute[185194]: 2026-01-31 10:06:56.842 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:06:57 compute-0 podman[241611]: 2026-01-31 10:06:57.950295582 +0000 UTC m=+0.074892478 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:06:59 compute-0 podman[201068]: time="2026-01-31T10:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:06:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:06:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 31 10:07:00 compute-0 nova_compute[185194]: 2026-01-31 10:07:00.186 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:01 compute-0 openstack_network_exporter[204162]: ERROR   10:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:07:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:07:01 compute-0 openstack_network_exporter[204162]: ERROR   10:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:07:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:07:01 compute-0 nova_compute[185194]: 2026-01-31 10:07:01.845 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:02 compute-0 podman[241633]: 2026-01-31 10:07:02.946657928 +0000 UTC m=+0.070632978 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 10:07:02 compute-0 podman[241632]: 2026-01-31 10:07:02.956458773 +0000 UTC m=+0.086350187 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, release-0.7.12=, vcs-type=git, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 31 10:07:05 compute-0 nova_compute[185194]: 2026-01-31 10:07:05.189 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:06 compute-0 nova_compute[185194]: 2026-01-31 10:07:06.847 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:10 compute-0 nova_compute[185194]: 2026-01-31 10:07:10.191 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:11 compute-0 podman[241669]: 2026-01-31 10:07:11.021795079 +0000 UTC m=+0.129309083 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:07:11 compute-0 nova_compute[185194]: 2026-01-31 10:07:11.851 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:15 compute-0 nova_compute[185194]: 2026-01-31 10:07:15.195 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:15 compute-0 podman[241693]: 2026-01-31 10:07:15.983875153 +0000 UTC m=+0.099070494 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, container_name=openstack_network_exporter, version=9.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, distribution-scope=public, vcs-type=git)
Jan 31 10:07:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:07:16.433 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:07:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:07:16.434 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:07:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:07:16.434 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:07:16 compute-0 nova_compute[185194]: 2026-01-31 10:07:16.856 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:17 compute-0 podman[241715]: 2026-01-31 10:07:17.955721832 +0000 UTC m=+0.080158578 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 10:07:20 compute-0 nova_compute[185194]: 2026-01-31 10:07:20.199 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:21 compute-0 nova_compute[185194]: 2026-01-31 10:07:21.857 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:24 compute-0 podman[241736]: 2026-01-31 10:07:24.949138908 +0000 UTC m=+0.056592775 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 10:07:25 compute-0 podman[241735]: 2026-01-31 10:07:25.046019706 +0000 UTC m=+0.152716833 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 10:07:25 compute-0 nova_compute[185194]: 2026-01-31 10:07:25.204 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:26 compute-0 nova_compute[185194]: 2026-01-31 10:07:26.861 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:28 compute-0 podman[241786]: 2026-01-31 10:07:28.948734524 +0000 UTC m=+0.069576802 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:07:29 compute-0 podman[201068]: time="2026-01-31T10:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:07:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:07:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 31 10:07:30 compute-0 nova_compute[185194]: 2026-01-31 10:07:30.208 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:31 compute-0 openstack_network_exporter[204162]: ERROR   10:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:07:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:07:31 compute-0 openstack_network_exporter[204162]: ERROR   10:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:07:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:07:31 compute-0 nova_compute[185194]: 2026-01-31 10:07:31.862 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:33 compute-0 podman[241809]: 2026-01-31 10:07:33.986606041 +0000 UTC m=+0.115561528 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:07:34 compute-0 podman[241810]: 2026-01-31 10:07:34.005338592 +0000 UTC m=+0.116752328 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 31 10:07:35 compute-0 nova_compute[185194]: 2026-01-31 10:07:35.212 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:36 compute-0 nova_compute[185194]: 2026-01-31 10:07:36.865 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:40 compute-0 nova_compute[185194]: 2026-01-31 10:07:40.215 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:41 compute-0 nova_compute[185194]: 2026-01-31 10:07:41.868 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:41 compute-0 podman[241846]: 2026-01-31 10:07:41.981078952 +0000 UTC m=+0.109973368 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:07:42 compute-0 nova_compute[185194]: 2026-01-31 10:07:42.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:43 compute-0 nova_compute[185194]: 2026-01-31 10:07:43.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:43 compute-0 nova_compute[185194]: 2026-01-31 10:07:43.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:07:43 compute-0 nova_compute[185194]: 2026-01-31 10:07:43.800 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:07:43 compute-0 nova_compute[185194]: 2026-01-31 10:07:43.801 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:07:43 compute-0 nova_compute[185194]: 2026-01-31 10:07:43.802 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:07:44 compute-0 nova_compute[185194]: 2026-01-31 10:07:44.757 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:07:44 compute-0 nova_compute[185194]: 2026-01-31 10:07:44.776 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:07:44 compute-0 nova_compute[185194]: 2026-01-31 10:07:44.777 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:07:44 compute-0 nova_compute[185194]: 2026-01-31 10:07:44.778 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:44 compute-0 nova_compute[185194]: 2026-01-31 10:07:44.778 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:45 compute-0 nova_compute[185194]: 2026-01-31 10:07:45.218 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:46 compute-0 nova_compute[185194]: 2026-01-31 10:07:46.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:46 compute-0 nova_compute[185194]: 2026-01-31 10:07:46.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:46 compute-0 nova_compute[185194]: 2026-01-31 10:07:46.871 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:46 compute-0 podman[241869]: 2026-01-31 10:07:46.987728254 +0000 UTC m=+0.113698242 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-01-22T05:09:47Z, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Jan 31 10:07:47 compute-0 nova_compute[185194]: 2026-01-31 10:07:47.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:48 compute-0 nova_compute[185194]: 2026-01-31 10:07:48.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:48 compute-0 podman[241888]: 2026-01-31 10:07:48.969119883 +0000 UTC m=+0.093850092 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.635 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.637 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.638 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.743 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.807 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.809 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.856 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.857 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.906 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.907 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.957 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:49 compute-0 nova_compute[185194]: 2026-01-31 10:07:49.967 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.026 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.028 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.082 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.083 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.128 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.129 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.177 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.187 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.221 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.235 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.237 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.306 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.308 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.358 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.360 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.439 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.783 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.784 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=72.35893249511719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.784 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.785 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.866 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.866 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.867 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.867 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.867 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.937 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.953 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.955 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:07:50 compute-0 nova_compute[185194]: 2026-01-31 10:07:50.955 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:07:51 compute-0 nova_compute[185194]: 2026-01-31 10:07:51.874 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:52 compute-0 nova_compute[185194]: 2026-01-31 10:07:52.956 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:07:52 compute-0 nova_compute[185194]: 2026-01-31 10:07:52.957 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:07:55 compute-0 nova_compute[185194]: 2026-01-31 10:07:55.226 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:55 compute-0 podman[241943]: 2026-01-31 10:07:55.982176104 +0000 UTC m=+0.107854995 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, 
tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 10:07:56 compute-0 podman[241944]: 2026-01-31 10:07:56.012711302 +0000 UTC m=+0.127637942 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 10:07:56 compute-0 nova_compute[185194]: 2026-01-31 10:07:56.876 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:07:59 compute-0 podman[201068]: time="2026-01-31T10:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:07:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:07:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 31 10:07:59 compute-0 podman[241989]: 2026-01-31 10:07:59.987269516 +0000 UTC m=+0.098613462 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:08:00 compute-0 nova_compute[185194]: 2026-01-31 10:08:00.229 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:01 compute-0 openstack_network_exporter[204162]: ERROR   10:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:08:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:08:01 compute-0 openstack_network_exporter[204162]: ERROR   10:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:08:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:08:01 compute-0 nova_compute[185194]: 2026-01-31 10:08:01.877 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:04 compute-0 podman[242013]: 2026-01-31 10:08:04.941850868 +0000 UTC m=+0.071115430 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler)
Jan 31 10:08:04 compute-0 podman[242014]: 2026-01-31 10:08:04.95702602 +0000 UTC m=+0.075864590 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 10:08:05 compute-0 nova_compute[185194]: 2026-01-31 10:08:05.234 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:06 compute-0 nova_compute[185194]: 2026-01-31 10:08:06.882 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:10 compute-0 nova_compute[185194]: 2026-01-31 10:08:10.238 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:11 compute-0 nova_compute[185194]: 2026-01-31 10:08:11.888 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:12 compute-0 podman[242052]: 2026-01-31 10:08:12.958069047 +0000 UTC m=+0.065706554 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:08:15 compute-0 nova_compute[185194]: 2026-01-31 10:08:15.242 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:08:16.435 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:08:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:08:16.435 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:08:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:08:16.436 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:08:16 compute-0 nova_compute[185194]: 2026-01-31 10:08:16.892 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:17 compute-0 podman[242075]: 2026-01-31 10:08:17.942748745 +0000 UTC m=+0.073103330 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2026-01-22T05:09:47Z, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 
'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1769056855, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:08:19 compute-0 podman[242097]: 2026-01-31 10:08:19.972850521 +0000 UTC m=+0.086670642 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:08:20 compute-0 nova_compute[185194]: 2026-01-31 10:08:20.247 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:21 compute-0 nova_compute[185194]: 2026-01-31 10:08:21.894 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 10:08:25 compute-0 nova_compute[185194]: 2026-01-31 10:08:25.250 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:26 compute-0 nova_compute[185194]: 2026-01-31 10:08:26.897 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:26 compute-0 podman[242117]: 2026-01-31 10:08:26.982931146 +0000 UTC m=+0.100984021 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:08:27 compute-0 podman[242116]: 2026-01-31 10:08:27.012322606 +0000 UTC m=+0.133731146 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 10:08:29 compute-0 podman[201068]: time="2026-01-31T10:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:08:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:08:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 31 10:08:30 compute-0 nova_compute[185194]: 2026-01-31 10:08:30.255 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.699 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.699 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.706 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.710 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.713 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:08:30.714195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:08:30.715463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.720 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.724 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.729 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:08:30.730418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:08:30.731403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.754 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.755 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.755 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.788 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.788 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.788 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.814 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.814 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.814 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:08:30.815745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.881 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.881 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.882 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.947 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.947 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:30.947 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:30 compute-0 podman[242159]: 2026-01-31 10:08:30.964679893 +0000 UTC m=+0.083187794 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.006 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.006 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.007 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.008 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.008 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:08:31.008582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.031 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.055 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.21875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.074 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.075 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.076 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.076 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.076 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.076 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.077 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.077 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.077 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.077 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.078 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:08:31.075927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.080 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.080 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.080 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.081 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.082 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.082 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.082 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.082 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.083 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.083 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.083 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.083 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.084 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:08:31.079955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.085 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:08:31.081700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:08:31.085331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.087 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.088 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.088 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.088 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.089 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.089 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.089 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:08:31.087299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.091 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:08:31.091167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.093 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.093 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.093 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.094 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:08:31.093048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.095 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.095 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:08:31.094772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.097 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.097 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.097 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 1396 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:08:31.096954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.098 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.099 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 40150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.099 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 435530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.099 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 33390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.101 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.101 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.101 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.102 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:08:31.099111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.103 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:08:31.101027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:08:31.102813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.103 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.104 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.105 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.105 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.105 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.105 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.106 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.106 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.106 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.107 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.108 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.109 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.109 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4386558050 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.109 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.110 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.110 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.110 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.110 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:08:31.104787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:08:31.108586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:08:31.112124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.112 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.114 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.114 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:08:31.113994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.114 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.115 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.115 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.115 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.115 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.116 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.116 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.117 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.118 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.118 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.118 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.118 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.119 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.119 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.119 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:08:31.117447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.121 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.121 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.121 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:08:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:08:31.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:08:31.121109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:08:31 compute-0 openstack_network_exporter[204162]: ERROR   10:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:08:31 compute-0 openstack_network_exporter[204162]: ERROR   10:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:08:31 compute-0 nova_compute[185194]: 2026-01-31 10:08:31.899 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:35 compute-0 nova_compute[185194]: 2026-01-31 10:08:35.259 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:35 compute-0 podman[242183]: 2026-01-31 10:08:35.963992389 +0000 UTC m=+0.083363168 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Jan 31 10:08:36 compute-0 podman[242184]: 2026-01-31 10:08:36.006445047 +0000 UTC m=+0.129079609 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 10:08:36 compute-0 nova_compute[185194]: 2026-01-31 10:08:36.901 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:40 compute-0 nova_compute[185194]: 2026-01-31 10:08:40.264 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:41 compute-0 nova_compute[185194]: 2026-01-31 10:08:41.903 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:43 compute-0 nova_compute[185194]: 2026-01-31 10:08:43.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:43 compute-0 nova_compute[185194]: 2026-01-31 10:08:43.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:43 compute-0 podman[242219]: 2026-01-31 10:08:43.959659131 +0000 UTC m=+0.088852286 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:08:45 compute-0 nova_compute[185194]: 2026-01-31 10:08:45.271 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:45 compute-0 nova_compute[185194]: 2026-01-31 10:08:45.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:45 compute-0 nova_compute[185194]: 2026-01-31 10:08:45.608 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:08:46 compute-0 nova_compute[185194]: 2026-01-31 10:08:46.869 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:08:46 compute-0 nova_compute[185194]: 2026-01-31 10:08:46.870 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:08:46 compute-0 nova_compute[185194]: 2026-01-31 10:08:46.871 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:08:46 compute-0 nova_compute[185194]: 2026-01-31 10:08:46.906 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:48 compute-0 podman[242243]: 2026-01-31 10:08:48.929722402 +0000 UTC m=+0.060706458 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Jan 31 10:08:50 compute-0 nova_compute[185194]: 2026-01-31 10:08:50.276 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:50 compute-0 podman[242265]: 2026-01-31 10:08:50.95199611 +0000 UTC m=+0.075009818 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.088 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.299 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.307 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.308 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.311 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.312 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.651 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.653 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.653 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.654 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.775 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.843 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.845 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.902 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.915 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.933 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:51 compute-0 nova_compute[185194]: 2026-01-31 10:08:51.999 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.000 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.063 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.072 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.149 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.150 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.215 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.221 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.268 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.269 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.313 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.325 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.390 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.391 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.439 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.441 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.494 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.500 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.543 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.882 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.884 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4793MB free_disk=72.35893249511719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.884 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.885 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.970 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.971 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.971 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.972 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.972 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:08:52 compute-0 nova_compute[185194]: 2026-01-31 10:08:52.989 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.008 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.009 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.024 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.051 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.174 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.209 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.212 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:08:53 compute-0 nova_compute[185194]: 2026-01-31 10:08:53.213 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:08:54 compute-0 nova_compute[185194]: 2026-01-31 10:08:54.213 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:08:54 compute-0 nova_compute[185194]: 2026-01-31 10:08:54.214 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:08:55 compute-0 nova_compute[185194]: 2026-01-31 10:08:55.280 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:56 compute-0 nova_compute[185194]: 2026-01-31 10:08:56.913 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:08:57 compute-0 podman[242320]: 2026-01-31 10:08:57.962620929 +0000 UTC m=+0.077088400 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 31 10:08:58 compute-0 podman[242319]: 2026-01-31 10:08:58.003481487 +0000 UTC m=+0.115773074 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:08:59 compute-0 podman[201068]: time="2026-01-31T10:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:08:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:08:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 31 10:09:00 compute-0 nova_compute[185194]: 2026-01-31 10:09:00.284 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:01 compute-0 openstack_network_exporter[204162]: ERROR   10:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:09:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:09:01 compute-0 openstack_network_exporter[204162]: ERROR   10:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:09:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:09:01 compute-0 nova_compute[185194]: 2026-01-31 10:09:01.915 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:01 compute-0 podman[242365]: 2026-01-31 10:09:01.996746383 +0000 UTC m=+0.107306871 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:09:05 compute-0 nova_compute[185194]: 2026-01-31 10:09:05.289 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:06 compute-0 nova_compute[185194]: 2026-01-31 10:09:06.918 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:06 compute-0 podman[242388]: 2026-01-31 10:09:06.997890776 +0000 UTC m=+0.109222078 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 31 10:09:06 compute-0 podman[242389]: 2026-01-31 10:09:06.999766864 +0000 UTC m=+0.108403299 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Jan 31 10:09:10 compute-0 nova_compute[185194]: 2026-01-31 10:09:10.293 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:11 compute-0 nova_compute[185194]: 2026-01-31 10:09:11.921 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:14 compute-0 podman[242425]: 2026-01-31 10:09:14.789887544 +0000 UTC m=+0.097825252 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:09:15 compute-0 nova_compute[185194]: 2026-01-31 10:09:15.297 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:09:16.435 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:09:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:09:16.437 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:09:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:09:16.438 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:09:16 compute-0 nova_compute[185194]: 2026-01-31 10:09:16.923 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:19 compute-0 podman[242450]: 2026-01-31 10:09:19.969969529 +0000 UTC m=+0.082686032 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, distribution-scope=public, release=1769056855, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.7, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.)
Jan 31 10:09:20 compute-0 nova_compute[185194]: 2026-01-31 10:09:20.302 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:21 compute-0 nova_compute[185194]: 2026-01-31 10:09:21.925 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:21 compute-0 podman[242472]: 2026-01-31 10:09:21.987786705 +0000 UTC m=+0.104072679 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 10:09:25 compute-0 nova_compute[185194]: 2026-01-31 10:09:25.306 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:26 compute-0 nova_compute[185194]: 2026-01-31 10:09:26.928 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:28 compute-0 podman[242491]: 2026-01-31 10:09:28.975340965 +0000 UTC m=+0.091868212 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:09:29 compute-0 podman[242490]: 2026-01-31 10:09:29.000530692 +0000 UTC m=+0.122045085 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 10:09:29 compute-0 podman[201068]: time="2026-01-31T10:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:09:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:09:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:09:30 compute-0 nova_compute[185194]: 2026-01-31 10:09:30.309 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:31 compute-0 openstack_network_exporter[204162]: ERROR   10:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:09:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:09:31 compute-0 openstack_network_exporter[204162]: ERROR   10:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:09:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:09:31 compute-0 nova_compute[185194]: 2026-01-31 10:09:31.931 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:32 compute-0 podman[242534]: 2026-01-31 10:09:32.961649455 +0000 UTC m=+0.082583077 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:09:35 compute-0 nova_compute[185194]: 2026-01-31 10:09:35.313 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:36 compute-0 nova_compute[185194]: 2026-01-31 10:09:36.933 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:37 compute-0 podman[242558]: 2026-01-31 10:09:37.94504759 +0000 UTC m=+0.066904362 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Jan 31 10:09:37 compute-0 podman[242559]: 2026-01-31 10:09:37.989171095 +0000 UTC m=+0.098326016 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:09:39 compute-0 nova_compute[185194]: 2026-01-31 10:09:39.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:39 compute-0 nova_compute[185194]: 2026-01-31 10:09:39.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:09:40 compute-0 nova_compute[185194]: 2026-01-31 10:09:40.316 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:41 compute-0 nova_compute[185194]: 2026-01-31 10:09:41.936 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:44 compute-0 nova_compute[185194]: 2026-01-31 10:09:44.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:45 compute-0 podman[242594]: 2026-01-31 10:09:45.010333156 +0000 UTC m=+0.134593491 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:09:45 compute-0 nova_compute[185194]: 2026-01-31 10:09:45.318 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:45 compute-0 nova_compute[185194]: 2026-01-31 10:09:45.625 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.887 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.887 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.887 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.888 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:09:46 compute-0 nova_compute[185194]: 2026-01-31 10:09:46.937 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:47 compute-0 nova_compute[185194]: 2026-01-31 10:09:47.796 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:09:47 compute-0 nova_compute[185194]: 2026-01-31 10:09:47.819 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:09:47 compute-0 nova_compute[185194]: 2026-01-31 10:09:47.820 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:09:47 compute-0 nova_compute[185194]: 2026-01-31 10:09:47.821 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:48 compute-0 nova_compute[185194]: 2026-01-31 10:09:48.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:48 compute-0 nova_compute[185194]: 2026-01-31 10:09:48.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:49 compute-0 nova_compute[185194]: 2026-01-31 10:09:49.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:49 compute-0 nova_compute[185194]: 2026-01-31 10:09:49.677 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:50 compute-0 nova_compute[185194]: 2026-01-31 10:09:50.321 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:50 compute-0 nova_compute[185194]: 2026-01-31 10:09:50.617 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:50 compute-0 nova_compute[185194]: 2026-01-31 10:09:50.618 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:09:50 compute-0 nova_compute[185194]: 2026-01-31 10:09:50.640 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 10:09:50 compute-0 podman[242618]: 2026-01-31 10:09:50.975052155 +0000 UTC m=+0.089353919 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-22T05:09:47Z, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, release=1769056855, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git)
Jan 31 10:09:51 compute-0 nova_compute[185194]: 2026-01-31 10:09:51.628 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:51 compute-0 nova_compute[185194]: 2026-01-31 10:09:51.940 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:52 compute-0 nova_compute[185194]: 2026-01-31 10:09:52.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:52 compute-0 nova_compute[185194]: 2026-01-31 10:09:52.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:09:53 compute-0 podman[242638]: 2026-01-31 10:09:53.010136476 +0000 UTC m=+0.126412716 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.635 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.636 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.638 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.757 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.807 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.809 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.859 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.861 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.909 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.910 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.965 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:53 compute-0 nova_compute[185194]: 2026-01-31 10:09:53.975 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.026 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.027 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.081 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.082 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.132 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.133 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.188 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.194 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.247 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.248 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.295 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.296 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.344 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.346 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.409 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.757 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.759 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4802MB free_disk=72.35502624511719GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.760 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.760 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.930 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.930 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.931 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.931 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:09:54 compute-0 nova_compute[185194]: 2026-01-31 10:09:54.932 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:09:55 compute-0 nova_compute[185194]: 2026-01-31 10:09:55.113 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:09:55 compute-0 nova_compute[185194]: 2026-01-31 10:09:55.128 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:09:55 compute-0 nova_compute[185194]: 2026-01-31 10:09:55.129 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:09:55 compute-0 nova_compute[185194]: 2026-01-31 10:09:55.130 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:09:55 compute-0 nova_compute[185194]: 2026-01-31 10:09:55.323 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:56 compute-0 nova_compute[185194]: 2026-01-31 10:09:56.942 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:09:59 compute-0 podman[201068]: time="2026-01-31T10:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:09:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:09:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 31 10:09:59 compute-0 podman[242693]: 2026-01-31 10:09:59.974954163 +0000 UTC m=+0.098961141 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 10:09:59 compute-0 podman[242694]: 2026-01-31 10:09:59.981690294 +0000 UTC m=+0.098674455 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 31 10:10:00 compute-0 nova_compute[185194]: 2026-01-31 10:10:00.327 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:01 compute-0 openstack_network_exporter[204162]: ERROR   10:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:10:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:10:01 compute-0 openstack_network_exporter[204162]: ERROR   10:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:10:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:10:01 compute-0 nova_compute[185194]: 2026-01-31 10:10:01.945 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:03 compute-0 podman[242739]: 2026-01-31 10:10:03.978483039 +0000 UTC m=+0.090397365 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:10:05 compute-0 nova_compute[185194]: 2026-01-31 10:10:05.329 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:06 compute-0 nova_compute[185194]: 2026-01-31 10:10:06.948 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:08 compute-0 podman[242764]: 2026-01-31 10:10:08.987133111 +0000 UTC m=+0.097871404 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Jan 31 10:10:09 compute-0 podman[242763]: 2026-01-31 10:10:09.008943572 +0000 UTC m=+0.119967422 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, distribution-scope=public, vcs-type=git, release=1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Jan 31 10:10:10 compute-0 nova_compute[185194]: 2026-01-31 10:10:10.333 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:10.481 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:10:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:10.482 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:10:10 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:10.483 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:10 compute-0 nova_compute[185194]: 2026-01-31 10:10:10.487 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:11 compute-0 nova_compute[185194]: 2026-01-31 10:10:11.951 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.871 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.872 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.891 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.976 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.978 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.992 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:10:14 compute-0 nova_compute[185194]: 2026-01-31 10:10:14.993 185198 INFO nova.compute.claims [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.137 185198 DEBUG nova.compute.provider_tree [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.155 185198 DEBUG nova.scheduler.client.report [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.202 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.203 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.242 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.242 185198 DEBUG nova.network.neutron [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.258 185198 INFO nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.287 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.336 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.371 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.372 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.373 185198 INFO nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Creating image(s)
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.374 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.374 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.375 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.388 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.442 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.444 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.446 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.471 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.527 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.529 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.574 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d,backing_fmt=raw /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.576 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.577 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.663 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.665 185198 DEBUG nova.virt.disk.api [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking if we can resize image /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.665 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.742 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.744 185198 DEBUG nova.virt.disk.api [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Cannot resize image /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.744 185198 DEBUG nova.objects.instance [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'migration_context' on Instance uuid ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.760 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.761 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.761 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.778 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.833 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.834 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.835 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.844 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.896 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.897 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.947 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.948 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:15 compute-0 nova_compute[185194]: 2026-01-31 10:10:15.949 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:16 compute-0 podman[242820]: 2026-01-31 10:10:16.00068654 +0000 UTC m=+0.114928375 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.008 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.011 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.012 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Ensure instance console log exists: /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.013 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.014 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.015 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:16.437 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:16.438 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:16.439 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:16 compute-0 nova_compute[185194]: 2026-01-31 10:10:16.953 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:19 compute-0 nova_compute[185194]: 2026-01-31 10:10:19.893 185198 DEBUG nova.network.neutron [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Successfully updated port: de051711-0bd4-4c0b-88e5-77353f5ab169 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:10:19 compute-0 nova_compute[185194]: 2026-01-31 10:10:19.906 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:10:19 compute-0 nova_compute[185194]: 2026-01-31 10:10:19.907 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:10:19 compute-0 nova_compute[185194]: 2026-01-31 10:10:19.907 185198 DEBUG nova.network.neutron [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:10:20 compute-0 nova_compute[185194]: 2026-01-31 10:10:20.010 185198 DEBUG nova.compute.manager [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-changed-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:10:20 compute-0 nova_compute[185194]: 2026-01-31 10:10:20.011 185198 DEBUG nova.compute.manager [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Refreshing instance network info cache due to event network-changed-de051711-0bd4-4c0b-88e5-77353f5ab169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:10:20 compute-0 nova_compute[185194]: 2026-01-31 10:10:20.012 185198 DEBUG oslo_concurrency.lockutils [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:10:20 compute-0 nova_compute[185194]: 2026-01-31 10:10:20.123 185198 DEBUG nova.network.neutron [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:10:20 compute-0 nova_compute[185194]: 2026-01-31 10:10:20.338 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:21 compute-0 podman[242852]: 2026-01-31 10:10:21.943748573 +0000 UTC m=+0.066753578 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 10:10:21 compute-0 nova_compute[185194]: 2026-01-31 10:10:21.955 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.778 185198 DEBUG nova.network.neutron [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.819 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.820 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance network_info: |[{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.821 185198 DEBUG oslo_concurrency.lockutils [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.822 185198 DEBUG nova.network.neutron [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Refreshing network info cache for port de051711-0bd4-4c0b-88e5-77353f5ab169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.826 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Start _get_guest_xml network_info=[{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.839 185198 WARNING nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.859 185198 DEBUG nova.virt.libvirt.host [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.861 185198 DEBUG nova.virt.libvirt.host [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.869 185198 DEBUG nova.virt.libvirt.host [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.870 185198 DEBUG nova.virt.libvirt.host [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.870 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.871 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T09:57:04Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='5ace5526-788a-41cf-9e40-e75da8858688',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T09:56:58Z,direct_url=<?>,disk_format='qcow2',id=8b57d666-88c0-4e62-a76a-0d45801ca1a6,min_disk=0,min_ram=0,name='cirros',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T09:57:00Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.872 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.872 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.873 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.873 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.874 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.874 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.874 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.875 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.875 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.876 185198 DEBUG nova.virt.hardware [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.881 185198 DEBUG nova.virt.libvirt.vif [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',id=4,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-tjsb8u99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:10:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDEyNDIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 31 10:10:22 compute-0 nova_compute[185194]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDEyNDIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.882 185198 DEBUG nova.network.os_vif_util [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.883 185198 DEBUG nova.network.os_vif_util [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.884 185198 DEBUG nova.objects.instance [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.898 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <uuid>ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5</uuid>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <name>instance-00000004</name>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <memory>524288</memory>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:name>vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe</nova:name>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:10:22</nova:creationTime>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:flavor name="m1.small">
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:memory>512</nova:memory>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:user uuid="d3342a7282114996b6010246d4ade24e">admin</nova:user>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:project uuid="155389cbed6644acacdbeeb6155adb54">admin</nova:project>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="8b57d666-88c0-4e62-a76a-0d45801ca1a6"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         <nova:port uuid="de051711-0bd4-4c0b-88e5-77353f5ab169">
Jan 31 10:10:22 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="192.168.0.108" ipVersion="4"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <system>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="serial">ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="uuid">ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </system>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <os>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </os>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <features>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </features>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <target dev="vdb" bus="virtio"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.config"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:11:49:95"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <target dev="tapde051711-0b"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/console.log" append="off"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <video>
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </video>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:10:22 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:10:22 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:10:22 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:10:22 compute-0 nova_compute[185194]: </domain>
Jan 31 10:10:22 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.899 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Preparing to wait for external event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.900 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.900 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.901 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.901 185198 DEBUG nova.virt.libvirt.vif [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',id=4,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-tjsb8u99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:10:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDEyNDIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDEyNDIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.902 185198 DEBUG nova.network.os_vif_util [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.903 185198 DEBUG nova.network.os_vif_util [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.904 185198 DEBUG os_vif [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.904 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.905 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.905 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.908 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.908 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde051711-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.909 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapde051711-0b, col_values=(('external_ids', {'iface-id': 'de051711-0bd4-4c0b-88e5-77353f5ab169', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:49:95', 'vm-uuid': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.911 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:22 compute-0 NetworkManager[56281]: <info>  [1769854222.9129] manager: (tapde051711-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.914 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.920 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.921 185198 INFO os_vif [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b')
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.979 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.980 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.981 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.981 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No VIF found with MAC fa:16:3e:11:49:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:10:22 compute-0 nova_compute[185194]: 2026-01-31 10:10:22.982 185198 INFO nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Using config drive
Jan 31 10:10:23 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:10:22.881 185198 DEBUG nova.virt.libvirt.vif [None req-379e16f8-f9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:10:23 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:10:22.901 185198 DEBUG nova.virt.libvirt.vif [None req-379e16f8-f9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:10:23 compute-0 nova_compute[185194]: 2026-01-31 10:10:23.929 185198 INFO nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Creating config drive at /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.config
Jan 31 10:10:23 compute-0 nova_compute[185194]: 2026-01-31 10:10:23.939 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxrakjh3f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:24 compute-0 podman[242876]: 2026-01-31 10:10:24.00527552 +0000 UTC m=+0.123318127 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.071 185198 DEBUG oslo_concurrency.processutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxrakjh3f" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:24 compute-0 kernel: tapde051711-0b: entered promiscuous mode
Jan 31 10:10:24 compute-0 NetworkManager[56281]: <info>  [1769854224.1379] manager: (tapde051711-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 31 10:10:24 compute-0 ovn_controller[97627]: 2026-01-31T10:10:24Z|00045|binding|INFO|Claiming lport de051711-0bd4-4c0b-88e5-77353f5ab169 for this chassis.
Jan 31 10:10:24 compute-0 ovn_controller[97627]: 2026-01-31T10:10:24Z|00046|binding|INFO|de051711-0bd4-4c0b-88e5-77353f5ab169: Claiming fa:16:3e:11:49:95 192.168.0.108
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.139 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.155 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:49:95 192.168.0.108'], port_security=['fa:16:3e:11:49:95 192.168.0.108'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-port-xfrh66srumvj', 'neutron:cidrs': '192.168.0.108/24', 'neutron:device_id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-port-xfrh66srumvj', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=de051711-0bd4-4c0b-88e5-77353f5ab169) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.158 106883 INFO neutron.agent.ovn.metadata.agent [-] Port de051711-0bd4-4c0b-88e5-77353f5ab169 in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b bound to our chassis
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.160 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.163 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.166 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:24 compute-0 ovn_controller[97627]: 2026-01-31T10:10:24Z|00047|binding|INFO|Setting lport de051711-0bd4-4c0b-88e5-77353f5ab169 up in Southbound
Jan 31 10:10:24 compute-0 ovn_controller[97627]: 2026-01-31T10:10:24Z|00048|binding|INFO|Setting lport de051711-0bd4-4c0b-88e5-77353f5ab169 ovn-installed in OVS
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.170 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.177 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[8f67f22c-0591-46de-861c-c32dd2bde993]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 systemd-udevd[242919]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:10:24 compute-0 systemd-machined[156556]: New machine qemu-4-instance-00000004.
Jan 31 10:10:24 compute-0 NetworkManager[56281]: <info>  [1769854224.1946] device (tapde051711-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:10:24 compute-0 NetworkManager[56281]: <info>  [1769854224.1956] device (tapde051711-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:10:24 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.211 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[0434342f-c496-43d4-9668-06eb8c7e9b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.215 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd30ea8-07a1-409a-9cae-80dc4c4f8a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.243 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3adca6-cb0f-4915-87a8-8ea979161c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.263 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2c0289-ffb9-415d-8e10-1a9575101488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 22407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242929, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.277 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[78c02ae4-47d2-45f9-b6a5-7b58342e473e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242932, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242932, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.279 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.282 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.283 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.284 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.284 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:10:24 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:10:24.285 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.378 185198 DEBUG nova.network.neutron [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updated VIF entry in instance network info cache for port de051711-0bd4-4c0b-88e5-77353f5ab169. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.379 185198 DEBUG nova.network.neutron [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.397 185198 DEBUG oslo_concurrency.lockutils [req-72eed2c3-a77e-475b-94f2-6e1a21bcc644 req-26f75fdd-f112-486a-b25b-4b276f783ae2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.412 185198 DEBUG nova.compute.manager [req-c07b99ad-459b-437b-b3b6-d0129afb0cca req-9198998a-76c7-4899-aa83-3249b722c098 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.413 185198 DEBUG oslo_concurrency.lockutils [req-c07b99ad-459b-437b-b3b6-d0129afb0cca req-9198998a-76c7-4899-aa83-3249b722c098 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.414 185198 DEBUG oslo_concurrency.lockutils [req-c07b99ad-459b-437b-b3b6-d0129afb0cca req-9198998a-76c7-4899-aa83-3249b722c098 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.414 185198 DEBUG oslo_concurrency.lockutils [req-c07b99ad-459b-437b-b3b6-d0129afb0cca req-9198998a-76c7-4899-aa83-3249b722c098 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.415 185198 DEBUG nova.compute.manager [req-c07b99ad-459b-437b-b3b6-d0129afb0cca req-9198998a-76c7-4899-aa83-3249b722c098 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Processing event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.509 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769854224.508306, ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.509 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] VM Started (Lifecycle Event)
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.511 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.518 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.525 185198 INFO nova.virt.libvirt.driver [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance spawned successfully.
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.525 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.531 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.538 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.553 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.554 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.556 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.557 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.558 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.559 185198 DEBUG nova.virt.libvirt.driver [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.568 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.569 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769854224.5084467, ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.569 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] VM Paused (Lifecycle Event)
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.612 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.618 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769854224.5187078, ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.618 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] VM Resumed (Lifecycle Event)
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.625 185198 INFO nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Took 9.25 seconds to spawn the instance on the hypervisor.
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.626 185198 DEBUG nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.633 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.638 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.666 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.687 185198 INFO nova.compute.manager [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Took 9.75 seconds to build instance.
Jan 31 10:10:24 compute-0 nova_compute[185194]: 2026-01-31 10:10:24.704 185198 DEBUG oslo_concurrency.lockutils [None req-379e16f8-f97a-4629-a12c-e1d7d9087db5 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 10:10:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.521 185198 DEBUG nova.compute.manager [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.521 185198 DEBUG oslo_concurrency.lockutils [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.522 185198 DEBUG oslo_concurrency.lockutils [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.522 185198 DEBUG oslo_concurrency.lockutils [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.522 185198 DEBUG nova.compute.manager [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] No waiting events found dispatching network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.522 185198 WARNING nova.compute.manager [req-50648bd0-e6d2-42d8-885d-5f5af6fbbb9a req-11da4f6b-66d6-40fb-92fc-4f94c8a598f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received unexpected event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 for instance with vm_state active and task_state None.
Jan 31 10:10:26 compute-0 nova_compute[185194]: 2026-01-31 10:10:26.959 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.185 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.221 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid a6212880-427f-4876-8598-06909416bde1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.221 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid 11b288d2-4ade-4790-8f82-165b662f9a1e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.222 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.222 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.225 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "a6212880-427f-4876-8598-06909416bde1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.225 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.226 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.226 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.227 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.227 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.228 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.228 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.311 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.315 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.330 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.332 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:27 compute-0 nova_compute[185194]: 2026-01-31 10:10:27.912 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:29 compute-0 podman[201068]: time="2026-01-31T10:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:10:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:10:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.700 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.700 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.700 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.819 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.822 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.825 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.827 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:10:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:30.828 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:10:30 compute-0 podman[242963]: 2026-01-31 10:10:30.980101971 +0000 UTC m=+0.101184728 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 31 10:10:31 compute-0 podman[242962]: 2026-01-31 10:10:31.016288325 +0000 UTC m=+0.135974856 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 10:10:31 compute-0 openstack_network_exporter[204162]: ERROR   10:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:10:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:10:31 compute-0 openstack_network_exporter[204162]: ERROR   10:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:10:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:10:31 compute-0 nova_compute[185194]: 2026-01-31 10:10:31.963 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.326 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 31 Jan 2026 10:10:30 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d81674cc-1c76-4d08-b380-c02a63efd2d2 x-openstack-request-id: req-d81674cc-1c76-4d08-b380-c02a63efd2d2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.327 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5", "name": "vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe", "status": "ACTIVE", "tenant_id": "155389cbed6644acacdbeeb6155adb54", "user_id": "d3342a7282114996b6010246d4ade24e", "metadata": {"metering.server_group": "cd99fa32-2992-4cd0-a9a0-648127ea67dc"}, "hostId": "67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d", "image": {"id": "8b57d666-88c0-4e62-a76a-0d45801ca1a6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/8b57d666-88c0-4e62-a76a-0d45801ca1a6"}]}, "flavor": {"id": "5ace5526-788a-41cf-9e40-e75da8858688", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/5ace5526-788a-41cf-9e40-e75da8858688"}]}, "created": "2026-01-31T10:10:13Z", "updated": "2026-01-31T10:10:24Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.108", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:11:49:95"}, {"version": 4, "addr": "192.168.122.250", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:11:49:95"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-31T10:10:24.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.327 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 used request id req-d81674cc-1c76-4d08-b380-c02a63efd2d2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.331 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.331 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.332 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:10:32.332065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:10:32.338322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.347 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.353 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.359 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.366 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 / tapde051711-0b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.366 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.369 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.369 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:10:32.367839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:10:32.369587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.405 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.406 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.407 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.447 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.448 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.449 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.498 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.499 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.500 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.533 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.534 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.535 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:10:32.539819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.630 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.631 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.632 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.716 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.717 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.718 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.799 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.799 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.800 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.871 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.871 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.872 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.873 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.874 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.875 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:10:32.874375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.893 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 nova_compute[185194]: 2026-01-31 10:10:32.914 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.920 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.945 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.973 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.974 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5: ceilometer.compute.pollsters.NoVolumeException
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.974 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.975 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.975 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.975 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.976 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.976 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.977 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.977 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.978 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.978 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.979 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.979 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 552921225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.980 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.981 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 16475126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.982 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.983 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.983 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.984 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.985 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:10:32.975362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:10:32.983362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.987 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.987 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.988 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.988 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.989 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.989 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.990 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.990 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.991 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.991 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.992 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.992 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.993 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.993 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.995 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:10:32.987926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.996 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.996 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.996 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.997 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:10:32.996092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.997 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:10:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:32.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.000 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:10:33.000892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.001 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.001 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.002 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.002 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.002 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.003 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.004 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.004 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.005 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.005 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.006 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.006 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.007 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-31T10:10:33.009599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.010 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.010 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe>]
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.011 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.012 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.012 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.013 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.014 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.014 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.015 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.015 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.015 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.016 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:10:33.011431) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:10:33.014446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:10:33.016825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.017 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.018 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.018 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.019 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:10:33.019640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.020 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.020 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.021 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.021 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.022 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 41780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.022 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 547070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.023 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 35040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.023 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 8070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:10:33.022134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:10:33.024444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.025 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.025 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 7614 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.025 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.027 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.027 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.027 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 2610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.027 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.028 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:10:33.027186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.029 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.030 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.030 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.030 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.031 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.031 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.031 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.032 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.032 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.032 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.033 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.033 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:10:33.029740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.035 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-31T10:10:33.035110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.035 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe>]
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:10:33.036672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.037 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.037 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.037 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.038 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4389268119 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.038 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.038 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.038 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.039 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.039 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.039 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.039 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.040 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.041 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:10:33.041628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.042 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.042 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.044 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.044 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:10:33.044075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.044 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.045 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.045 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.045 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.046 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.046 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.046 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.047 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.047 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.047 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.047 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.049 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.050 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.050 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.050 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.051 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.051 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.051 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.051 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.052 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.052 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.053 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:10:33.049334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.055 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.055 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.055 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.055 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.055 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:10:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:10:33.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:10:33.055001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:10:34 compute-0 podman[243007]: 2026-01-31 10:10:34.998076522 +0000 UTC m=+0.118618968 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:10:36 compute-0 nova_compute[185194]: 2026-01-31 10:10:36.966 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:37 compute-0 nova_compute[185194]: 2026-01-31 10:10:37.918 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:39 compute-0 podman[243032]: 2026-01-31 10:10:39.983895558 +0000 UTC m=+0.097827983 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:10:40 compute-0 podman[243031]: 2026-01-31 10:10:40.005921264 +0000 UTC m=+0.123155632 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler)
Jan 31 10:10:41 compute-0 nova_compute[185194]: 2026-01-31 10:10:41.973 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:42 compute-0 nova_compute[185194]: 2026-01-31 10:10:42.923 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:46 compute-0 nova_compute[185194]: 2026-01-31 10:10:46.648 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:46 compute-0 nova_compute[185194]: 2026-01-31 10:10:46.650 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:10:46 compute-0 podman[243069]: 2026-01-31 10:10:46.950437089 +0000 UTC m=+0.079178041 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:10:46 compute-0 nova_compute[185194]: 2026-01-31 10:10:46.979 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:47 compute-0 nova_compute[185194]: 2026-01-31 10:10:47.022 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:10:47 compute-0 nova_compute[185194]: 2026-01-31 10:10:47.023 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:10:47 compute-0 nova_compute[185194]: 2026-01-31 10:10:47.024 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:10:47 compute-0 nova_compute[185194]: 2026-01-31 10:10:47.926 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.398 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.415 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.415 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.416 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.416 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:48 compute-0 nova_compute[185194]: 2026-01-31 10:10:48.416 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:49 compute-0 nova_compute[185194]: 2026-01-31 10:10:49.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:50 compute-0 nova_compute[185194]: 2026-01-31 10:10:50.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:51 compute-0 nova_compute[185194]: 2026-01-31 10:10:51.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:51 compute-0 nova_compute[185194]: 2026-01-31 10:10:51.980 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:52 compute-0 nova_compute[185194]: 2026-01-31 10:10:52.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:52 compute-0 nova_compute[185194]: 2026-01-31 10:10:52.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:10:52 compute-0 nova_compute[185194]: 2026-01-31 10:10:52.929 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:52 compute-0 podman[243093]: 2026-01-31 10:10:52.953657341 +0000 UTC m=+0.078324900 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, distribution-scope=public, managed_by=edpm_ansible, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 31 10:10:54 compute-0 ovn_controller[97627]: 2026-01-31T10:10:54Z|00049|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Jan 31 10:10:54 compute-0 podman[243115]: 2026-01-31 10:10:54.994555868 +0000 UTC m=+0.109568169 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.627 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.628 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.630 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.732 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.783 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.784 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.830 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.832 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.878 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.884 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.926 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.934 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:55 compute-0 nova_compute[185194]: 2026-01-31 10:10:55.999 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.000 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.085 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.088 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.169 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.172 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.233 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.242 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.330 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.332 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.415 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.416 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.476 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.477 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.541 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.547 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.633 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.634 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.687 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.688 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.737 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.739 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.815 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:10:56 compute-0 nova_compute[185194]: 2026-01-31 10:10:56.982 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.157 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.158 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=72.35397338867188GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.158 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.159 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.308 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.308 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.308 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.309 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.309 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.309 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.436 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.458 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.477 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.477 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:10:57 compute-0 ovn_controller[97627]: 2026-01-31T10:10:57Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:49:95 192.168.0.108
Jan 31 10:10:57 compute-0 ovn_controller[97627]: 2026-01-31T10:10:57Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:49:95 192.168.0.108
Jan 31 10:10:57 compute-0 nova_compute[185194]: 2026-01-31 10:10:57.932 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:10:59 compute-0 podman[201068]: time="2026-01-31T10:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:10:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:10:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 31 10:11:01 compute-0 openstack_network_exporter[204162]: ERROR   10:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:11:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:11:01 compute-0 openstack_network_exporter[204162]: ERROR   10:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:11:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:11:01 compute-0 podman[243198]: 2026-01-31 10:11:01.960160486 +0000 UTC m=+0.068267656 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 10:11:01 compute-0 nova_compute[185194]: 2026-01-31 10:11:01.984 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:01 compute-0 podman[243197]: 2026-01-31 10:11:01.98681693 +0000 UTC m=+0.101195498 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:11:02 compute-0 nova_compute[185194]: 2026-01-31 10:11:02.935 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:05 compute-0 podman[243245]: 2026-01-31 10:11:05.958409108 +0000 UTC m=+0.076373401 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:11:06 compute-0 nova_compute[185194]: 2026-01-31 10:11:06.987 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:07 compute-0 nova_compute[185194]: 2026-01-31 10:11:07.938 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:11 compute-0 podman[243269]: 2026-01-31 10:11:11.025429345 +0000 UTC m=+0.124047095 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:11:11 compute-0 podman[243268]: 2026-01-31 10:11:11.025774154 +0000 UTC m=+0.131199826 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, version=9.4, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc.)
Jan 31 10:11:11 compute-0 nova_compute[185194]: 2026-01-31 10:11:11.989 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:12 compute-0 nova_compute[185194]: 2026-01-31 10:11:12.941 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:11:16.438 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:11:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:11:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:11:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:11:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:11:16 compute-0 nova_compute[185194]: 2026-01-31 10:11:16.991 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:17 compute-0 nova_compute[185194]: 2026-01-31 10:11:17.944 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:17 compute-0 podman[243308]: 2026-01-31 10:11:17.966963305 +0000 UTC m=+0.086097047 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:11:21 compute-0 nova_compute[185194]: 2026-01-31 10:11:21.994 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:22 compute-0 nova_compute[185194]: 2026-01-31 10:11:22.946 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:23 compute-0 podman[243334]: 2026-01-31 10:11:23.976746341 +0000 UTC m=+0.089111192 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, version=9.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855)
Jan 31 10:11:25 compute-0 podman[243354]: 2026-01-31 10:11:25.996754621 +0000 UTC m=+0.110360430 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 10:11:26 compute-0 nova_compute[185194]: 2026-01-31 10:11:26.996 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:27 compute-0 nova_compute[185194]: 2026-01-31 10:11:27.949 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:29 compute-0 podman[201068]: time="2026-01-31T10:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:11:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:11:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 31 10:11:31 compute-0 openstack_network_exporter[204162]: ERROR   10:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:11:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:11:31 compute-0 openstack_network_exporter[204162]: ERROR   10:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:11:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:11:32 compute-0 nova_compute[185194]: 2026-01-31 10:11:31.999 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:32 compute-0 nova_compute[185194]: 2026-01-31 10:11:32.953 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:32 compute-0 podman[243374]: 2026-01-31 10:11:32.956206873 +0000 UTC m=+0.077890229 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:11:32 compute-0 podman[243373]: 2026-01-31 10:11:32.994923766 +0000 UTC m=+0.118458529 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 10:11:36 compute-0 podman[243418]: 2026-01-31 10:11:36.964984831 +0000 UTC m=+0.081350628 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:11:37 compute-0 nova_compute[185194]: 2026-01-31 10:11:37.001 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:37 compute-0 nova_compute[185194]: 2026-01-31 10:11:37.956 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:41 compute-0 podman[243444]: 2026-01-31 10:11:41.976659033 +0000 UTC m=+0.095175572 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:11:41 compute-0 podman[243443]: 2026-01-31 10:11:41.985787797 +0000 UTC m=+0.109129670 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container)
Jan 31 10:11:42 compute-0 nova_compute[185194]: 2026-01-31 10:11:42.003 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:42 compute-0 nova_compute[185194]: 2026-01-31 10:11:42.959 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:47 compute-0 nova_compute[185194]: 2026-01-31 10:11:47.005 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:47 compute-0 nova_compute[185194]: 2026-01-31 10:11:47.962 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:48 compute-0 nova_compute[185194]: 2026-01-31 10:11:48.477 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:48 compute-0 nova_compute[185194]: 2026-01-31 10:11:48.478 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:11:48 compute-0 podman[243481]: 2026-01-31 10:11:48.95185919 +0000 UTC m=+0.076779030 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:11:49 compute-0 nova_compute[185194]: 2026-01-31 10:11:49.227 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:11:49 compute-0 nova_compute[185194]: 2026-01-31 10:11:49.228 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:11:49 compute-0 nova_compute[185194]: 2026-01-31 10:11:49.228 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.681 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.709 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.709 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.710 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.710 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.710 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:50 compute-0 nova_compute[185194]: 2026-01-31 10:11:50.711 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:52 compute-0 nova_compute[185194]: 2026-01-31 10:11:52.007 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:52 compute-0 nova_compute[185194]: 2026-01-31 10:11:52.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:52 compute-0 nova_compute[185194]: 2026-01-31 10:11:52.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:52 compute-0 nova_compute[185194]: 2026-01-31 10:11:52.629 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:52 compute-0 nova_compute[185194]: 2026-01-31 10:11:52.967 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:54 compute-0 nova_compute[185194]: 2026-01-31 10:11:54.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:54 compute-0 nova_compute[185194]: 2026-01-31 10:11:54.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:11:54 compute-0 podman[243505]: 2026-01-31 10:11:54.953577297 +0000 UTC m=+0.077594761 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, build-date=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, config_id=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.641 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.642 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.642 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.643 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.727 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.777 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.778 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.848 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.849 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.904 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.906 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.973 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:56 compute-0 nova_compute[185194]: 2026-01-31 10:11:56.981 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 podman[243533]: 2026-01-31 10:11:57.004327339 +0000 UTC m=+0.116175961 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.009 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.048 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.049 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.116 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.117 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.192 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.207 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.267 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.276 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.331 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.333 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.390 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.391 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.442 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.444 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.511 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.519 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.586 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.588 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.651 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.652 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.701 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.702 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.764 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:11:57 compute-0 nova_compute[185194]: 2026-01-31 10:11:57.971 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.169 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.170 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4562MB free_disk=72.33250427246094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.170 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.170 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.287 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.288 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.288 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.289 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.289 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.289 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.376 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.392 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.393 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:11:58 compute-0 nova_compute[185194]: 2026-01-31 10:11:58.394 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:11:59 compute-0 podman[201068]: time="2026-01-31T10:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:11:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:11:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 31 10:12:01 compute-0 openstack_network_exporter[204162]: ERROR   10:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:12:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:12:01 compute-0 openstack_network_exporter[204162]: ERROR   10:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:12:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:12:02 compute-0 nova_compute[185194]: 2026-01-31 10:12:02.012 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:02 compute-0 nova_compute[185194]: 2026-01-31 10:12:02.974 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:03 compute-0 podman[243593]: 2026-01-31 10:12:03.982194802 +0000 UTC m=+0.092845763 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 10:12:04 compute-0 podman[243592]: 2026-01-31 10:12:04.025332168 +0000 UTC m=+0.143057750 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 10:12:07 compute-0 nova_compute[185194]: 2026-01-31 10:12:07.014 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:07 compute-0 nova_compute[185194]: 2026-01-31 10:12:07.976 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:08 compute-0 podman[243637]: 2026-01-31 10:12:08.003614146 +0000 UTC m=+0.120181174 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:12:12 compute-0 nova_compute[185194]: 2026-01-31 10:12:12.015 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:12 compute-0 podman[243658]: 2026-01-31 10:12:12.950487427 +0000 UTC m=+0.080336261 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, distribution-scope=public, container_name=kepler, version=9.4, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Jan 31 10:12:12 compute-0 podman[243659]: 2026-01-31 10:12:12.963266795 +0000 UTC m=+0.083686447 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 10:12:12 compute-0 nova_compute[185194]: 2026-01-31 10:12:12.978 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:12:16.439 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:12:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:12:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:12:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:12:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:12:17 compute-0 nova_compute[185194]: 2026-01-31 10:12:17.018 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:17 compute-0 nova_compute[185194]: 2026-01-31 10:12:17.982 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:19 compute-0 podman[243697]: 2026-01-31 10:12:19.968465453 +0000 UTC m=+0.089926348 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:12:20 compute-0 sshd-session[243694]: Connection reset by 147.185.132.70 port 58290 [preauth]
Jan 31 10:12:22 compute-0 nova_compute[185194]: 2026-01-31 10:12:22.021 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:22 compute-0 nova_compute[185194]: 2026-01-31 10:12:22.985 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:25 compute-0 podman[243722]: 2026-01-31 10:12:25.955515254 +0000 UTC m=+0.082999340 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, version=9.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vcs-type=git, name=ubi9/ubi-minimal)
Jan 31 10:12:26 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 10:12:27 compute-0 nova_compute[185194]: 2026-01-31 10:12:27.022 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:27 compute-0 nova_compute[185194]: 2026-01-31 10:12:27.988 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:27 compute-0 podman[243744]: 2026-01-31 10:12:27.992696769 +0000 UTC m=+0.102247384 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:12:29 compute-0 podman[201068]: time="2026-01-31T10:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:12:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:12:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.700 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.701 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.701 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.719 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.720 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94961fe00>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.721 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.724 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.726 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.728 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:12:30.729443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:12:30.730582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.734 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.737 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.740 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.744 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:12:30.743955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:12:30.744967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.767 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.768 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.768 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.787 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.787 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.788 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.807 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.808 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.808 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.833 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.834 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.834 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:12:30.836953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.898 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.899 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.899 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.974 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.974 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:30.975 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.040 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.042 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.043 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.119 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.119 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.120 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.120 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:12:31.121348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.144 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.168 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.190 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.210 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.213 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.213 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:12:31.213157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.214 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.214 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.214 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.214 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.215 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.215 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.215 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.215 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.216 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.216 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.216 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.217 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:12:31.217961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.218 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.218 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.218 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.219 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.219 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.219 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.219 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:12:31.220069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.220 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.221 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.221 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.221 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.221 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.222 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.222 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.222 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.222 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.223 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.223 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.224 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:12:31.224140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.224 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.225 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.225 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:12:31.226167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.226 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.227 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.227 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.227 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.227 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.228 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.228 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.228 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.228 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.229 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.229 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:12:31.230627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.231 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.231 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.231 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.231 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.233 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:12:31.232744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.233 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.233 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.233 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.234 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.235 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.235 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:12:31.234920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.235 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.235 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.237 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.237 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:12:31.237020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.237 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.238 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.238 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 1438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.238 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 43280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:12:31.239176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.239 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 548580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.240 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 36590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.240 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 33410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:12:31.241349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.241 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.242 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 7614 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.242 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.242 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:12:31.243498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.243 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.244 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.244 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.244 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.245 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:12:31.245638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.246 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.246 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.246 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.246 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.247 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.247 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.247 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.247 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.248 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.248 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.248 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.248 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.249 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.250 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:12:31.250001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.250 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.250 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.251 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.251 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4390831409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.251 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.251 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.251 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.252 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.252 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.252 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.252 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.253 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.253 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.254 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:12:31.254334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.255 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.255 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.255 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.255 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.256 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:12:31.256386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.257 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.257 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.257 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.257 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.258 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.258 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.258 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.258 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.259 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.259 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.260 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:12:31.260485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.261 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.261 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.261 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.261 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.262 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.262 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.262 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.262 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.263 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.263 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:12:31.264584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.265 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.265 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.265 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.266 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.267 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:12:31.268 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:12:31 compute-0 openstack_network_exporter[204162]: ERROR   10:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:12:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:12:31 compute-0 openstack_network_exporter[204162]: ERROR   10:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:12:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:12:32 compute-0 nova_compute[185194]: 2026-01-31 10:12:32.025 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:32 compute-0 nova_compute[185194]: 2026-01-31 10:12:32.990 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:34 compute-0 podman[243766]: 2026-01-31 10:12:34.987000917 +0000 UTC m=+0.105398824 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true)
Jan 31 10:12:34 compute-0 podman[243765]: 2026-01-31 10:12:34.988814364 +0000 UTC m=+0.102396828 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 10:12:37 compute-0 nova_compute[185194]: 2026-01-31 10:12:37.027 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:37 compute-0 nova_compute[185194]: 2026-01-31 10:12:37.994 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:38 compute-0 podman[243811]: 2026-01-31 10:12:38.960895022 +0000 UTC m=+0.082961039 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:12:42 compute-0 nova_compute[185194]: 2026-01-31 10:12:42.030 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:42 compute-0 nova_compute[185194]: 2026-01-31 10:12:42.998 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:43 compute-0 podman[243836]: 2026-01-31 10:12:43.956545278 +0000 UTC m=+0.075014795 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:12:43 compute-0 podman[243835]: 2026-01-31 10:12:43.993328892 +0000 UTC m=+0.107644763 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible)
Jan 31 10:12:47 compute-0 nova_compute[185194]: 2026-01-31 10:12:47.030 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:48 compute-0 nova_compute[185194]: 2026-01-31 10:12:48.001 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.394 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.395 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.396 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.626 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.627 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.628 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:12:49 compute-0 nova_compute[185194]: 2026-01-31 10:12:49.629 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.775 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.805 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.806 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.806 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.807 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:50 compute-0 nova_compute[185194]: 2026-01-31 10:12:50.807 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:50 compute-0 podman[243876]: 2026-01-31 10:12:50.969670621 +0000 UTC m=+0.091074857 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:12:51 compute-0 nova_compute[185194]: 2026-01-31 10:12:51.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:52 compute-0 nova_compute[185194]: 2026-01-31 10:12:52.032 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:52 compute-0 nova_compute[185194]: 2026-01-31 10:12:52.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:53 compute-0 nova_compute[185194]: 2026-01-31 10:12:53.002 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:53 compute-0 nova_compute[185194]: 2026-01-31 10:12:53.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:56 compute-0 nova_compute[185194]: 2026-01-31 10:12:56.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:56 compute-0 nova_compute[185194]: 2026-01-31 10:12:56.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:12:56 compute-0 podman[243898]: 2026-01-31 10:12:56.707547303 +0000 UTC m=+0.065859830 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:12:57 compute-0 nova_compute[185194]: 2026-01-31 10:12:57.033 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.006 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.637 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.638 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.639 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.751 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.807 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.808 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.891 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.892 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.942 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.943 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:58 compute-0 podman[243923]: 2026-01-31 10:12:58.957436194 +0000 UTC m=+0.085551876 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 10:12:58 compute-0 nova_compute[185194]: 2026-01-31 10:12:58.997 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.012 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.067 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.068 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.125 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.125 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.175 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.176 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.229 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.234 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.289 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.289 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.342 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.343 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.404 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.405 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.457 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.464 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.545 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.546 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.602 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.603 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.652 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.653 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:12:59 compute-0 nova_compute[185194]: 2026-01-31 10:12:59.713 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:12:59 compute-0 podman[201068]: time="2026-01-31T10:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:12:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:12:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.061 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.062 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=72.33250427246094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.062 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.063 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.174 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.174 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.174 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.175 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.175 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.175 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.284 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.303 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.304 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:13:00 compute-0 nova_compute[185194]: 2026-01-31 10:13:00.304 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:13:01 compute-0 openstack_network_exporter[204162]: ERROR   10:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:13:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:13:01 compute-0 openstack_network_exporter[204162]: ERROR   10:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:13:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:13:02 compute-0 nova_compute[185194]: 2026-01-31 10:13:02.037 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:03 compute-0 nova_compute[185194]: 2026-01-31 10:13:03.008 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:05 compute-0 podman[243986]: 2026-01-31 10:13:05.972845763 +0000 UTC m=+0.097046841 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:13:05 compute-0 podman[243987]: 2026-01-31 10:13:05.981287049 +0000 UTC m=+0.100152510 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 31 10:13:07 compute-0 nova_compute[185194]: 2026-01-31 10:13:07.039 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:08 compute-0 nova_compute[185194]: 2026-01-31 10:13:08.011 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:10 compute-0 podman[244028]: 2026-01-31 10:13:10.003854171 +0000 UTC m=+0.124348931 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:13:12 compute-0 nova_compute[185194]: 2026-01-31 10:13:12.042 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:13 compute-0 nova_compute[185194]: 2026-01-31 10:13:13.014 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:14 compute-0 podman[244054]: 2026-01-31 10:13:14.752275859 +0000 UTC m=+0.075363964 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:13:14 compute-0 podman[244053]: 2026-01-31 10:13:14.775129745 +0000 UTC m=+0.100631752 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git)
Jan 31 10:13:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:13:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:13:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:13:16.440 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:13:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:13:16.441 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:13:17 compute-0 nova_compute[185194]: 2026-01-31 10:13:17.044 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:18 compute-0 nova_compute[185194]: 2026-01-31 10:13:18.016 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:21 compute-0 podman[244092]: 2026-01-31 10:13:21.949431997 +0000 UTC m=+0.069431502 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:13:22 compute-0 nova_compute[185194]: 2026-01-31 10:13:22.046 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:23 compute-0 nova_compute[185194]: 2026-01-31 10:13:23.019 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:26 compute-0 podman[244116]: 2026-01-31 10:13:26.954618287 +0000 UTC m=+0.083647957 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, version=9.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 10:13:27 compute-0 nova_compute[185194]: 2026-01-31 10:13:27.047 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:28 compute-0 nova_compute[185194]: 2026-01-31 10:13:28.023 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:29 compute-0 podman[201068]: time="2026-01-31T10:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:13:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:13:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:13:29 compute-0 podman[244138]: 2026-01-31 10:13:29.984315152 +0000 UTC m=+0.097365268 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 10:13:31 compute-0 openstack_network_exporter[204162]: ERROR   10:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:13:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:13:31 compute-0 openstack_network_exporter[204162]: ERROR   10:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:13:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:13:32 compute-0 nova_compute[185194]: 2026-01-31 10:13:32.051 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:33 compute-0 nova_compute[185194]: 2026-01-31 10:13:33.024 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:36 compute-0 podman[244157]: 2026-01-31 10:13:36.976807545 +0000 UTC m=+0.097110602 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 31 10:13:37 compute-0 podman[244156]: 2026-01-31 10:13:37.004374222 +0000 UTC m=+0.126495836 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:13:37 compute-0 nova_compute[185194]: 2026-01-31 10:13:37.053 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:38 compute-0 nova_compute[185194]: 2026-01-31 10:13:38.027 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:40 compute-0 podman[244198]: 2026-01-31 10:13:40.941769949 +0000 UTC m=+0.062183114 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:13:42 compute-0 nova_compute[185194]: 2026-01-31 10:13:42.057 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:43 compute-0 nova_compute[185194]: 2026-01-31 10:13:43.031 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:44 compute-0 podman[244223]: 2026-01-31 10:13:44.974336906 +0000 UTC m=+0.091528376 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 10:13:44 compute-0 podman[244222]: 2026-01-31 10:13:44.977405134 +0000 UTC m=+0.103629742 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, name=ubi9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Jan 31 10:13:47 compute-0 nova_compute[185194]: 2026-01-31 10:13:47.061 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:48 compute-0 nova_compute[185194]: 2026-01-31 10:13:48.035 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:49 compute-0 nova_compute[185194]: 2026-01-31 10:13:49.305 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:49 compute-0 nova_compute[185194]: 2026-01-31 10:13:49.305 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:13:50 compute-0 nova_compute[185194]: 2026-01-31 10:13:50.437 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:13:50 compute-0 nova_compute[185194]: 2026-01-31 10:13:50.438 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:13:50 compute-0 nova_compute[185194]: 2026-01-31 10:13:50.438 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:13:52 compute-0 nova_compute[185194]: 2026-01-31 10:13:52.065 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:52 compute-0 podman[244260]: 2026-01-31 10:13:52.963352635 +0000 UTC m=+0.068218986 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.038 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.053 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [{"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.070 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.070 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.071 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.071 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.071 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.072 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:53 compute-0 nova_compute[185194]: 2026-01-31 10:13:53.641 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:54 compute-0 nova_compute[185194]: 2026-01-31 10:13:54.636 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:57 compute-0 nova_compute[185194]: 2026-01-31 10:13:57.067 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:57 compute-0 nova_compute[185194]: 2026-01-31 10:13:57.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:57 compute-0 nova_compute[185194]: 2026-01-31 10:13:57.604 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:13:57 compute-0 podman[244282]: 2026-01-31 10:13:57.971923024 +0000 UTC m=+0.092983663 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, release=1769056855, name=ubi9/ubi-minimal, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64)
Jan 31 10:13:58 compute-0 nova_compute[185194]: 2026-01-31 10:13:58.041 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.631 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.631 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.737 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:13:59 compute-0 podman[201068]: time="2026-01-31T10:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:13:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:13:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4375 "" "Go-http-client/1.1"
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.787 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.788 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.843 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.844 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.902 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.903 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.952 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:13:59 compute-0 nova_compute[185194]: 2026-01-31 10:13:59.959 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.039 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.041 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.103 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.104 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.162 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.164 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.217 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.225 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.285 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.286 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.354 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.356 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.416 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.418 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.473 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.479 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.533 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.534 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.588 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.590 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.646 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.647 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:14:00 compute-0 nova_compute[185194]: 2026-01-31 10:14:00.694 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:14:00 compute-0 podman[244350]: 2026-01-31 10:14:00.926799586 +0000 UTC m=+0.056143842 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.044 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.045 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4531MB free_disk=72.33250427246094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.045 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.046 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.126 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.126 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.127 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.127 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.127 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.127 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.144 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.160 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.161 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.176 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.196 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.297 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.315 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.316 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:14:01 compute-0 nova_compute[185194]: 2026-01-31 10:14:01.316 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:14:01 compute-0 openstack_network_exporter[204162]: ERROR   10:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:14:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:14:01 compute-0 openstack_network_exporter[204162]: ERROR   10:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:14:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:14:02 compute-0 nova_compute[185194]: 2026-01-31 10:14:02.070 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:03 compute-0 nova_compute[185194]: 2026-01-31 10:14:03.045 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:07 compute-0 nova_compute[185194]: 2026-01-31 10:14:07.073 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:07 compute-0 podman[244368]: 2026-01-31 10:14:07.993712758 +0000 UTC m=+0.112168228 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 10:14:08 compute-0 nova_compute[185194]: 2026-01-31 10:14:08.046 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:08 compute-0 podman[244367]: 2026-01-31 10:14:08.050158795 +0000 UTC m=+0.165761603 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:14:11 compute-0 podman[244410]: 2026-01-31 10:14:11.929116398 +0000 UTC m=+0.055080033 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:14:12 compute-0 nova_compute[185194]: 2026-01-31 10:14:12.075 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:13 compute-0 nova_compute[185194]: 2026-01-31 10:14:13.050 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:15 compute-0 podman[244434]: 2026-01-31 10:14:15.982103053 +0000 UTC m=+0.085912774 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.tags=base rhel9, config_id=kepler, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 31 10:14:16 compute-0 podman[244435]: 2026-01-31 10:14:16.00415072 +0000 UTC m=+0.110218519 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 10:14:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:14:16.441 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:14:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:14:16.441 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:14:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:14:16.443 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:14:17 compute-0 nova_compute[185194]: 2026-01-31 10:14:17.078 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:18 compute-0 nova_compute[185194]: 2026-01-31 10:14:18.055 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:22 compute-0 nova_compute[185194]: 2026-01-31 10:14:22.080 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:23 compute-0 nova_compute[185194]: 2026-01-31 10:14:23.058 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:23 compute-0 podman[244474]: 2026-01-31 10:14:23.942131198 +0000 UTC m=+0.066649127 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:14:27 compute-0 nova_compute[185194]: 2026-01-31 10:14:27.083 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:28 compute-0 nova_compute[185194]: 2026-01-31 10:14:28.061 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:28 compute-0 podman[244498]: 2026-01-31 10:14:28.991497784 +0000 UTC m=+0.110485356 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, vcs-type=git, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:14:29 compute-0 podman[201068]: time="2026-01-31T10:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:14:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:14:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.701 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.702 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd948392ab0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.715 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.719 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'name': 'vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.723 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.727 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:14:30.728929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:14:30.731008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.737 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.742 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.747 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.753 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.754 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.755 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.756 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.757 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:14:30.755386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:14:30.757578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.794 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.795 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.795 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.827 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.829 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.829 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.867 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.868 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.868 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.904 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.905 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.905 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.906 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:14:30.907200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.981 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.982 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:30.982 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.053 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.053 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.054 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.122 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.122 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.122 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.193 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.194 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.194 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:14:31.195844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.218 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.84375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.244 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/memory.usage volume: 49.03515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.268 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.288 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.289 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:14:31.289950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.290 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.290 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.291 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.291 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 685824902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.291 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 233417132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.291 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.latency volume: 119084332 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.291 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.292 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.292 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.292 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.292 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.292 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.294 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.295 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:14:31.294065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.296 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.297 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:14:31.296156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.297 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.297 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.298 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.298 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.298 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.298 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.298 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.299 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.300 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:14:31.299868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.300 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.300 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.301 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.302 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.302 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.302 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.302 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.303 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.303 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.303 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.303 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.303 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.304 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:14:31.301655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.305 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.306 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.306 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.307 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:14:31.305759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:14:31.308248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.308 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.310 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.310 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets volume: 67 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.310 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.310 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:14:31.310041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.312 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.312 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:14:31.311984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.312 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.313 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 44720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:14:31.313813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.314 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/cpu volume: 550080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.314 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 38090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.314 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 34940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:14:31.315530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.315 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.316 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes volume: 7614 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.316 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.316 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:14:31.317335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.318 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.318 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.319 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:14:31.319442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.319 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.320 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.320 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.320 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.320 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.320 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.321 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.321 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.321 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.321 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.321 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.322 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:14:31.323373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.323 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.324 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.324 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 4390831409 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.324 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 12475850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.324 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.324 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.325 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.325 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.325 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.325 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.325 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.326 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.327 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.327 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:14:31.326697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:14:31.328302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.329 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.329 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.329 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.329 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.330 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.330 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.330 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.330 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.330 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.331 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.332 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.332 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:14:31.331778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.332 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.332 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.333 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.333 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.333 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.333 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.333 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.334 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.334 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 14 DEBUG ceilometer.compute.pollsters [-] 11b288d2-4ade-4790-8f82-165b662f9a1e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:14:31.335154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.336 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.336 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:14:31.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:14:31 compute-0 openstack_network_exporter[204162]: ERROR   10:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:14:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:14:31 compute-0 openstack_network_exporter[204162]: ERROR   10:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:14:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:14:31 compute-0 podman[244519]: 2026-01-31 10:14:31.9636823 +0000 UTC m=+0.088683274 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 10:14:32 compute-0 nova_compute[185194]: 2026-01-31 10:14:32.085 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:33 compute-0 nova_compute[185194]: 2026-01-31 10:14:33.065 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:37 compute-0 nova_compute[185194]: 2026-01-31 10:14:37.088 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:38 compute-0 nova_compute[185194]: 2026-01-31 10:14:38.069 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:38 compute-0 podman[244538]: 2026-01-31 10:14:38.973527081 +0000 UTC m=+0.082719993 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:14:38 compute-0 podman[244537]: 2026-01-31 10:14:38.991318881 +0000 UTC m=+0.106945166 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 10:14:42 compute-0 nova_compute[185194]: 2026-01-31 10:14:42.090 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:42 compute-0 podman[244579]: 2026-01-31 10:14:42.939891388 +0000 UTC m=+0.059905706 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:14:43 compute-0 nova_compute[185194]: 2026-01-31 10:14:43.072 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:45 compute-0 nova_compute[185194]: 2026-01-31 10:14:45.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:45 compute-0 nova_compute[185194]: 2026-01-31 10:14:45.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:14:47 compute-0 nova_compute[185194]: 2026-01-31 10:14:47.092 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:47 compute-0 podman[244604]: 2026-01-31 10:14:47.236357673 +0000 UTC m=+0.075068230 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:14:47 compute-0 podman[244603]: 2026-01-31 10:14:47.248825748 +0000 UTC m=+0.096322487 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, com.redhat.component=ubi9-container, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, vendor=Red Hat, Inc.)
Jan 31 10:14:48 compute-0 nova_compute[185194]: 2026-01-31 10:14:48.075 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:49 compute-0 nova_compute[185194]: 2026-01-31 10:14:49.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:49 compute-0 nova_compute[185194]: 2026-01-31 10:14:49.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:14:50 compute-0 nova_compute[185194]: 2026-01-31 10:14:50.468 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:14:50 compute-0 nova_compute[185194]: 2026-01-31 10:14:50.468 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:14:50 compute-0 nova_compute[185194]: 2026-01-31 10:14:50.468 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.096 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.598 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.619 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.620 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.621 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.621 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.621 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:52 compute-0 nova_compute[185194]: 2026-01-31 10:14:52.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:53 compute-0 nova_compute[185194]: 2026-01-31 10:14:53.079 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:53 compute-0 nova_compute[185194]: 2026-01-31 10:14:53.617 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:53 compute-0 nova_compute[185194]: 2026-01-31 10:14:53.618 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:55 compute-0 podman[244642]: 2026-01-31 10:14:55.004749874 +0000 UTC m=+0.117254317 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:14:56 compute-0 nova_compute[185194]: 2026-01-31 10:14:56.601 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.098 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:14:57 compute-0 nova_compute[185194]: 2026-01-31 10:14:57.646 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 10:14:58 compute-0 nova_compute[185194]: 2026-01-31 10:14:58.083 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:14:59 compute-0 podman[201068]: time="2026-01-31T10:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:14:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:14:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:14:59 compute-0 podman[244666]: 2026-01-31 10:14:59.928689858 +0000 UTC m=+0.055979247 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.7, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, release=1769056855, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 31 10:15:01 compute-0 openstack_network_exporter[204162]: ERROR   10:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:15:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:15:01 compute-0 openstack_network_exporter[204162]: ERROR   10:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:15:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.647 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.677 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.678 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.679 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.680 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.788 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.869 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.870 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.922 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.923 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.974 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:01 compute-0 nova_compute[185194]: 2026-01-31 10:15:01.975 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.043 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.054 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.100 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.116 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.117 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.175 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.177 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.257 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.258 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.309 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.318 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.368 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.369 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.418 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.420 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.495 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.497 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.549 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.557 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.612 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.613 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.693 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.694 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.746 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.747 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:15:02 compute-0 nova_compute[185194]: 2026-01-31 10:15:02.824 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:15:02 compute-0 podman[244737]: 2026-01-31 10:15:02.968898945 +0000 UTC m=+0.093521157 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.085 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.220 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.221 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4474MB free_disk=72.33250427246094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.222 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.222 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.514 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.514 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 11b288d2-4ade-4790-8f82-165b662f9a1e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.514 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.514 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.515 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.515 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.792 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.812 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.815 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:15:03 compute-0 nova_compute[185194]: 2026-01-31 10:15:03.815 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.088 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.089 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.090 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.090 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.090 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.093 185198 INFO nova.compute.manager [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Terminating instance
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.094 185198 DEBUG nova.compute.manager [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.103 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 kernel: tapc6014353-db (unregistering): left promiscuous mode
Jan 31 10:15:07 compute-0 NetworkManager[56281]: <info>  [1769854507.1470] device (tapc6014353-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.163 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 ovn_controller[97627]: 2026-01-31T10:15:07Z|00050|binding|INFO|Releasing lport c6014353-db88-4d66-9154-67869a227159 from this chassis (sb_readonly=0)
Jan 31 10:15:07 compute-0 ovn_controller[97627]: 2026-01-31T10:15:07Z|00051|binding|INFO|Setting lport c6014353-db88-4d66-9154-67869a227159 down in Southbound
Jan 31 10:15:07 compute-0 ovn_controller[97627]: 2026-01-31T10:15:07Z|00052|binding|INFO|Removing iface tapc6014353-db ovn-installed in OVS
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.168 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.179 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:ae:e7 192.168.0.231'], port_security=['fa:16:3e:7d:ae:e7 192.168.0.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-port-h5agd3bpn43q', 'neutron:cidrs': '192.168.0.231/24', 'neutron:device_id': '11b288d2-4ade-4790-8f82-165b662f9a1e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-port-h5agd3bpn43q', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=c6014353-db88-4d66-9154-67869a227159) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.179 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.181 106883 INFO neutron.agent.ovn.metadata.agent [-] Port c6014353-db88-4d66-9154-67869a227159 in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b unbound from our chassis
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.183 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 10:15:07 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 31 10:15:07 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 10min 50.042s CPU time.
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.201 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[8fde0f47-0503-414c-a8d8-69446caca0c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 systemd-machined[156556]: Machine qemu-2-instance-00000002 terminated.
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.236 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ab8d07-25b2-4af2-9af4-1ab9e79316e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.241 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[04b7e0c4-a6cd-49af-9155-b3d4871a3856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.275 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[15b4a1db-3f56-42aa-b993-a91e5ccfb0fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.296 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[664e5b32-76bf-4db5-ba0a-789092505582]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 44523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244769, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.316 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[87d28120-dff4-43ac-a809-67882a865750]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244770, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244770, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.320 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.323 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.332 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.334 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.335 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.335 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:15:07 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:07.335 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.371 185198 INFO nova.virt.libvirt.driver [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance destroyed successfully.
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.372 185198 DEBUG nova.objects.instance [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'resources' on Instance uuid 11b288d2-4ade-4790-8f82-165b662f9a1e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.674 185198 DEBUG nova.virt.libvirt.vif [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:59:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-h4y4dpe7vemg-i3lhkc4b6i7g-vnf-dywsv6smexzq',id=2,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:59:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-5t72byzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:59:46Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzA3MDIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 31 10:15:07 compute-0 nova_compute[185194]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzA3M
DIzNDQ5MDAzODYwNTgwNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcwNzAyMzQ0OTAwMzg2MDU4MDY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MDcwMjM0NDkwMDM4NjA1ODA2PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=11b288d2-4ade-4790-8f82-165b662f9a1e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.675 185198 DEBUG nova.network.os_vif_util [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "c6014353-db88-4d66-9154-67869a227159", "address": "fa:16:3e:7d:ae:e7", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6014353-db", "ovs_interfaceid": "c6014353-db88-4d66-9154-67869a227159", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.676 185198 DEBUG nova.network.os_vif_util [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.676 185198 DEBUG os_vif [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:15:07 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:15:07.674 185198 DEBUG nova.virt.libvirt.vif [None req-7d0e45dc-ac [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.680 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.680 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6014353-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.683 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.685 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.692 185198 INFO os_vif [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:ae:e7,bridge_name='br-int',has_traffic_filtering=True,id=c6014353-db88-4d66-9154-67869a227159,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc6014353-db')
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.692 185198 INFO nova.virt.libvirt.driver [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Deleting instance files /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e_del
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.693 185198 INFO nova.virt.libvirt.driver [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Deletion of /var/lib/nova/instances/11b288d2-4ade-4790-8f82-165b662f9a1e_del complete
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.794 185198 DEBUG nova.virt.libvirt.host [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.795 185198 INFO nova.virt.libvirt.host [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] UEFI support detected
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.797 185198 INFO nova.compute.manager [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.798 185198 DEBUG oslo.service.loopingcall [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.799 185198 DEBUG nova.compute.manager [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:15:07 compute-0 nova_compute[185194]: 2026-01-31 10:15:07.799 185198 DEBUG nova.network.neutron [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.038 185198 DEBUG nova.compute.manager [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-vif-unplugged-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.039 185198 DEBUG oslo_concurrency.lockutils [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.040 185198 DEBUG oslo_concurrency.lockutils [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.040 185198 DEBUG oslo_concurrency.lockutils [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.041 185198 DEBUG nova.compute.manager [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] No waiting events found dispatching network-vif-unplugged-c6014353-db88-4d66-9154-67869a227159 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.043 185198 DEBUG nova.compute.manager [req-f1d16565-678c-4e49-a2a9-c88218510082 req-683baf80-3f74-4a26-b360-2232ed91e631 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-vif-unplugged-c6014353-db88-4d66-9154-67869a227159 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:15:08 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:08.376 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:15:08 compute-0 nova_compute[185194]: 2026-01-31 10:15:08.376 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:08 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:08.378 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.614 185198 DEBUG nova.network.neutron [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.647 185198 INFO nova.compute.manager [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Took 1.85 seconds to deallocate network for instance.
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.690 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.691 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.920 185198 DEBUG nova.compute.provider_tree [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.939 185198 DEBUG nova.scheduler.client.report [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.960 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:09 compute-0 podman[244794]: 2026-01-31 10:15:09.976323915 +0000 UTC m=+0.090709825 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 31 10:15:09 compute-0 nova_compute[185194]: 2026-01-31 10:15:09.988 185198 INFO nova.scheduler.client.report [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Deleted allocations for instance 11b288d2-4ade-4790-8f82-165b662f9a1e
Jan 31 10:15:10 compute-0 podman[244793]: 2026-01-31 10:15:10.006026916 +0000 UTC m=+0.127949937 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.069 185198 DEBUG oslo_concurrency.lockutils [None req-7d0e45dc-acca-48a7-bafb-52d79a92836b d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.222 185198 DEBUG nova.compute.manager [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.223 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.223 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.223 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "11b288d2-4ade-4790-8f82-165b662f9a1e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.224 185198 DEBUG nova.compute.manager [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] No waiting events found dispatching network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.224 185198 WARNING nova.compute.manager [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received unexpected event network-vif-plugged-c6014353-db88-4d66-9154-67869a227159 for instance with vm_state deleted and task_state None.
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.224 185198 DEBUG nova.compute.manager [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Received event network-changed-c6014353-db88-4d66-9154-67869a227159 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.224 185198 DEBUG nova.compute.manager [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Refreshing instance network info cache due to event network-changed-c6014353-db88-4d66-9154-67869a227159. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.225 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.225 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.225 185198 DEBUG nova.network.neutron [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Refreshing network info cache for port c6014353-db88-4d66-9154-67869a227159 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:15:10 compute-0 nova_compute[185194]: 2026-01-31 10:15:10.457 185198 DEBUG nova.network.neutron [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:15:11 compute-0 nova_compute[185194]: 2026-01-31 10:15:11.045 185198 DEBUG nova.network.neutron [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 31 10:15:11 compute-0 nova_compute[185194]: 2026-01-31 10:15:11.046 185198 DEBUG oslo_concurrency.lockutils [req-1578ea37-3f1f-4af9-9948-ddbaa6e3da3d req-d7e97fc8-6243-457e-9590-5aa4b2725e8e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-11b288d2-4ade-4790-8f82-165b662f9a1e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:15:11 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:11.379 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:15:12 compute-0 nova_compute[185194]: 2026-01-31 10:15:12.105 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:12 compute-0 nova_compute[185194]: 2026-01-31 10:15:12.684 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:13 compute-0 podman[244836]: 2026-01-31 10:15:13.978691509 +0000 UTC m=+0.104746801 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:15:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:16.442 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:15:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:16.443 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:15:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:15:16.443 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:15:17 compute-0 nova_compute[185194]: 2026-01-31 10:15:17.108 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:17 compute-0 nova_compute[185194]: 2026-01-31 10:15:17.688 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:17 compute-0 podman[244861]: 2026-01-31 10:15:17.985818209 +0000 UTC m=+0.102199026 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, container_name=kepler, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=kepler, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 31 10:15:18 compute-0 podman[244862]: 2026-01-31 10:15:18.032824938 +0000 UTC m=+0.137277673 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:15:22 compute-0 nova_compute[185194]: 2026-01-31 10:15:22.109 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:22 compute-0 nova_compute[185194]: 2026-01-31 10:15:22.370 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769854507.368503, 11b288d2-4ade-4790-8f82-165b662f9a1e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:15:22 compute-0 nova_compute[185194]: 2026-01-31 10:15:22.370 185198 INFO nova.compute.manager [-] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] VM Stopped (Lifecycle Event)
Jan 31 10:15:22 compute-0 nova_compute[185194]: 2026-01-31 10:15:22.388 185198 DEBUG nova.compute.manager [None req-2d359cb9-d4e5-4880-b10b-c6ed5c0f94e0 - - - - - -] [instance: 11b288d2-4ade-4790-8f82-165b662f9a1e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:15:22 compute-0 nova_compute[185194]: 2026-01-31 10:15:22.691 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:26 compute-0 podman[244900]: 2026-01-31 10:15:26.003646285 +0000 UTC m=+0.115921893 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:15:27 compute-0 nova_compute[185194]: 2026-01-31 10:15:27.112 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:27 compute-0 nova_compute[185194]: 2026-01-31 10:15:27.694 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:29 compute-0 podman[201068]: time="2026-01-31T10:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:15:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:15:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 31 10:15:30 compute-0 podman[244923]: 2026-01-31 10:15:30.95095015 +0000 UTC m=+0.076676520 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1769056855, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible)
Jan 31 10:15:31 compute-0 openstack_network_exporter[204162]: ERROR   10:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:15:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:15:31 compute-0 openstack_network_exporter[204162]: ERROR   10:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:15:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:15:32 compute-0 nova_compute[185194]: 2026-01-31 10:15:32.114 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:32 compute-0 nova_compute[185194]: 2026-01-31 10:15:32.696 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:33 compute-0 podman[244944]: 2026-01-31 10:15:33.995647774 +0000 UTC m=+0.120685953 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent)
Jan 31 10:15:37 compute-0 nova_compute[185194]: 2026-01-31 10:15:37.116 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:37 compute-0 nova_compute[185194]: 2026-01-31 10:15:37.700 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:40 compute-0 podman[244965]: 2026-01-31 10:15:40.961854015 +0000 UTC m=+0.077697226 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:15:40 compute-0 podman[244964]: 2026-01-31 10:15:40.983703478 +0000 UTC m=+0.105968082 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:15:41 compute-0 ovn_controller[97627]: 2026-01-31T10:15:41Z|00053|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 10:15:42 compute-0 nova_compute[185194]: 2026-01-31 10:15:42.119 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:42 compute-0 nova_compute[185194]: 2026-01-31 10:15:42.703 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:44 compute-0 podman[245008]: 2026-01-31 10:15:44.720176217 +0000 UTC m=+0.056866800 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:15:47 compute-0 nova_compute[185194]: 2026-01-31 10:15:47.122 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:47 compute-0 nova_compute[185194]: 2026-01-31 10:15:47.705 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:48 compute-0 podman[245031]: 2026-01-31 10:15:48.974969244 +0000 UTC m=+0.090395027 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 31 10:15:48 compute-0 podman[245032]: 2026-01-31 10:15:48.984872335 +0000 UTC m=+0.100088213 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:15:51 compute-0 nova_compute[185194]: 2026-01-31 10:15:51.773 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:51 compute-0 nova_compute[185194]: 2026-01-31 10:15:51.774 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:15:52 compute-0 nova_compute[185194]: 2026-01-31 10:15:52.124 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:52 compute-0 nova_compute[185194]: 2026-01-31 10:15:52.369 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:15:52 compute-0 nova_compute[185194]: 2026-01-31 10:15:52.370 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:15:52 compute-0 nova_compute[185194]: 2026-01-31 10:15:52.370 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:15:52 compute-0 nova_compute[185194]: 2026-01-31 10:15:52.708 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.597 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.618 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.619 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.620 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:53 compute-0 nova_compute[185194]: 2026-01-31 10:15:53.620 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:54 compute-0 nova_compute[185194]: 2026-01-31 10:15:54.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:54 compute-0 nova_compute[185194]: 2026-01-31 10:15:54.652 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:55 compute-0 nova_compute[185194]: 2026-01-31 10:15:55.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:56 compute-0 podman[245070]: 2026-01-31 10:15:56.992069102 +0000 UTC m=+0.110412719 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:15:57 compute-0 nova_compute[185194]: 2026-01-31 10:15:57.127 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:57 compute-0 nova_compute[185194]: 2026-01-31 10:15:57.603 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:57 compute-0 nova_compute[185194]: 2026-01-31 10:15:57.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:15:57 compute-0 nova_compute[185194]: 2026-01-31 10:15:57.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:15:57 compute-0 nova_compute[185194]: 2026-01-31 10:15:57.712 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:15:59 compute-0 podman[201068]: time="2026-01-31T10:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:15:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:15:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 31 10:16:01 compute-0 openstack_network_exporter[204162]: ERROR   10:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:16:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:16:01 compute-0 openstack_network_exporter[204162]: ERROR   10:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:16:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:16:01 compute-0 podman[245093]: 2026-01-31 10:16:01.974537162 +0000 UTC m=+0.089089957 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1769056855, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9/ubi-minimal, version=9.7, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:16:02 compute-0 nova_compute[185194]: 2026-01-31 10:16:02.132 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:02 compute-0 nova_compute[185194]: 2026-01-31 10:16:02.716 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.645 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.759 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.812 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.813 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.863 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.864 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.911 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.912 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.963 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:03 compute-0 nova_compute[185194]: 2026-01-31 10:16:03.971 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.030 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.031 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.082 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.083 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.133 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.134 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.193 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.200 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.259 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.260 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.310 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.311 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.360 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.361 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.425 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.786 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.787 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4679MB free_disk=72.35451889038086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.787 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.788 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.928 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.929 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.929 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.929 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:16:04 compute-0 nova_compute[185194]: 2026-01-31 10:16:04.930 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:16:04 compute-0 podman[245152]: 2026-01-31 10:16:04.998727823 +0000 UTC m=+0.098193739 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:16:05 compute-0 nova_compute[185194]: 2026-01-31 10:16:05.023 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:16:05 compute-0 nova_compute[185194]: 2026-01-31 10:16:05.038 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:16:05 compute-0 nova_compute[185194]: 2026-01-31 10:16:05.062 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:16:05 compute-0 nova_compute[185194]: 2026-01-31 10:16:05.064 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:16:07 compute-0 nova_compute[185194]: 2026-01-31 10:16:07.133 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:07 compute-0 nova_compute[185194]: 2026-01-31 10:16:07.718 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:11 compute-0 podman[245171]: 2026-01-31 10:16:11.998871194 +0000 UTC m=+0.117057808 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:16:12 compute-0 podman[245170]: 2026-01-31 10:16:12.009600007 +0000 UTC m=+0.121516032 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 10:16:12 compute-0 nova_compute[185194]: 2026-01-31 10:16:12.136 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:12 compute-0 nova_compute[185194]: 2026-01-31 10:16:12.722 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:14 compute-0 podman[245212]: 2026-01-31 10:16:14.955923248 +0000 UTC m=+0.082308255 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:16:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:16:16.443 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:16:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:16:16.445 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:16:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:16:16.446 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:16:17 compute-0 nova_compute[185194]: 2026-01-31 10:16:17.139 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:17 compute-0 nova_compute[185194]: 2026-01-31 10:16:17.723 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:19 compute-0 podman[245237]: 2026-01-31 10:16:19.958917631 +0000 UTC m=+0.086493591 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0)
Jan 31 10:16:19 compute-0 podman[245238]: 2026-01-31 10:16:19.996414555 +0000 UTC m=+0.117298354 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 31 10:16:22 compute-0 nova_compute[185194]: 2026-01-31 10:16:22.142 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:22 compute-0 nova_compute[185194]: 2026-01-31 10:16:22.725 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:27 compute-0 nova_compute[185194]: 2026-01-31 10:16:27.145 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:27 compute-0 nova_compute[185194]: 2026-01-31 10:16:27.727 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:27 compute-0 podman[245277]: 2026-01-31 10:16:27.970654513 +0000 UTC m=+0.083283679 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:16:29 compute-0 podman[201068]: time="2026-01-31T10:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:16:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:16:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.702 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.702 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.702 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.707 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba50ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.712 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.716 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:16:30.717117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:16:30.718797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.723 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.728 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.731 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:16:30.732762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:16:30.733720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.756 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.756 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.756 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.787 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.788 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.788 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.822 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.823 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.823 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:16:30.825782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.911 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.912 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:30.913 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.009 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.010 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.010 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.086 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.087 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.087 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.088 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:16:31.088506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.113 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.134 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.156 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.158 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.159 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.159 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:16:31.159045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.160 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.161 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.162 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.162 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.163 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.163 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.163 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.166 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.166 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.166 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.167 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:16:31.166013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.170 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:16:31.170062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.171 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.171 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.171 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.172 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.172 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.172 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.173 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.173 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.174 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.175 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.175 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:16:31.175179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.175 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.176 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.177 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.177 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.178 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:16:31.177655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.178 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.178 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.179 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.179 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.179 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.180 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.180 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.180 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.182 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:16:31.182593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.183 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.183 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.185 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:16:31.184968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.185 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.186 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:16:31.187459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.188 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.188 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.190 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.190 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.190 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:16:31.189862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.191 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.192 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 46300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.192 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 39650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.193 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 36540000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:16:31.192315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.195 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:16:31.194856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.195 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.196 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.197 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.198 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:16:31.197401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.200 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:16:31.200009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.200 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.200 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.201 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.201 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.202 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.202 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.202 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.203 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.205 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.205 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:16:31.204851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.205 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.206 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.206 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.207 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.207 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.207 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.208 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.209 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.209 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.210 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:16:31.209577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.210 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.212 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.212 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.213 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.213 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.214 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.214 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:16:31.212100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.217 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.217 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:16:31.217173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.218 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.218 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.219 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.219 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.219 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.220 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.220 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.221 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.222 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.222 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.222 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.223 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.224 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.225 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:16:31.222130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.226 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:16:31.227 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:16:31 compute-0 openstack_network_exporter[204162]: ERROR   10:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:16:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:16:31 compute-0 openstack_network_exporter[204162]: ERROR   10:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:16:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:16:32 compute-0 nova_compute[185194]: 2026-01-31 10:16:32.147 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:32 compute-0 nova_compute[185194]: 2026-01-31 10:16:32.731 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:32 compute-0 podman[245302]: 2026-01-31 10:16:32.947848018 +0000 UTC m=+0.062189792 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.7, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 31 10:16:35 compute-0 podman[245323]: 2026-01-31 10:16:35.97390064 +0000 UTC m=+0.093629422 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:16:37 compute-0 nova_compute[185194]: 2026-01-31 10:16:37.150 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:37 compute-0 nova_compute[185194]: 2026-01-31 10:16:37.734 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:42 compute-0 nova_compute[185194]: 2026-01-31 10:16:42.154 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:42 compute-0 nova_compute[185194]: 2026-01-31 10:16:42.737 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:42 compute-0 podman[245341]: 2026-01-31 10:16:42.977429576 +0000 UTC m=+0.092878744 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:16:43 compute-0 podman[245340]: 2026-01-31 10:16:43.010825105 +0000 UTC m=+0.129329731 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:16:45 compute-0 podman[245385]: 2026-01-31 10:16:45.972633722 +0000 UTC m=+0.091098518 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:16:47 compute-0 nova_compute[185194]: 2026-01-31 10:16:47.156 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:47 compute-0 nova_compute[185194]: 2026-01-31 10:16:47.740 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:50 compute-0 podman[245413]: 2026-01-31 10:16:50.979799952 +0000 UTC m=+0.090077342 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 31 10:16:50 compute-0 podman[245412]: 2026-01-31 10:16:50.980709275 +0000 UTC m=+0.092203736 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Jan 31 10:16:52 compute-0 nova_compute[185194]: 2026-01-31 10:16:52.158 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:52 compute-0 nova_compute[185194]: 2026-01-31 10:16:52.745 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.065 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.066 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.066 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.649 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.650 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.650 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:16:53 compute-0 nova_compute[185194]: 2026-01-31 10:16:53.651 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.235 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.252 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.253 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.253 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.254 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.254 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:55 compute-0 nova_compute[185194]: 2026-01-31 10:16:55.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:56 compute-0 nova_compute[185194]: 2026-01-31 10:16:56.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:57 compute-0 nova_compute[185194]: 2026-01-31 10:16:57.161 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:57 compute-0 nova_compute[185194]: 2026-01-31 10:16:57.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:57 compute-0 nova_compute[185194]: 2026-01-31 10:16:57.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:16:57 compute-0 nova_compute[185194]: 2026-01-31 10:16:57.748 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:16:58 compute-0 podman[245450]: 2026-01-31 10:16:58.971196787 +0000 UTC m=+0.097021609 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:16:59 compute-0 nova_compute[185194]: 2026-01-31 10:16:59.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:16:59 compute-0 podman[201068]: time="2026-01-31T10:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:16:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:16:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 31 10:17:01 compute-0 openstack_network_exporter[204162]: ERROR   10:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:17:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:17:01 compute-0 openstack_network_exporter[204162]: ERROR   10:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:17:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:17:02 compute-0 nova_compute[185194]: 2026-01-31 10:17:02.164 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:02 compute-0 nova_compute[185194]: 2026-01-31 10:17:02.751 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.740 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.741 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.741 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.741 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.854 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.947 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:03 compute-0 nova_compute[185194]: 2026-01-31 10:17:03.948 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:03 compute-0 podman[245474]: 2026-01-31 10:17:03.96085072 +0000 UTC m=+0.083846523 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., version=9.7)
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.017 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.017 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.074 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.076 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.134 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.142 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.217 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.218 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.268 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.270 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.320 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.321 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.376 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.389 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.473 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.475 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.530 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.530 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.581 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.582 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:17:04 compute-0 nova_compute[185194]: 2026-01-31 10:17:04.632 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.035 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.036 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4664MB free_disk=72.35451889038086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.037 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.037 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.114 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.114 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.115 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.115 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.115 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.209 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.226 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.228 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:17:05 compute-0 nova_compute[185194]: 2026-01-31 10:17:05.228 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:17:06 compute-0 podman[245531]: 2026-01-31 10:17:06.979187084 +0000 UTC m=+0.093043548 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 10:17:07 compute-0 nova_compute[185194]: 2026-01-31 10:17:07.166 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:07 compute-0 nova_compute[185194]: 2026-01-31 10:17:07.754 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:12 compute-0 nova_compute[185194]: 2026-01-31 10:17:12.171 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:12 compute-0 nova_compute[185194]: 2026-01-31 10:17:12.757 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:13 compute-0 podman[245550]: 2026-01-31 10:17:13.986396435 +0000 UTC m=+0.113129938 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:17:13 compute-0 podman[245551]: 2026-01-31 10:17:13.997885907 +0000 UTC m=+0.115715914 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:17:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:17:16.445 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:17:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:17:16.445 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:17:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:17:16.446 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:17:16 compute-0 podman[245596]: 2026-01-31 10:17:16.969507014 +0000 UTC m=+0.089580630 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:17:17 compute-0 nova_compute[185194]: 2026-01-31 10:17:17.173 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:17 compute-0 nova_compute[185194]: 2026-01-31 10:17:17.760 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:21 compute-0 podman[245622]: 2026-01-31 10:17:21.962630733 +0000 UTC m=+0.081148815 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Jan 31 10:17:21 compute-0 podman[245623]: 2026-01-31 10:17:21.97393113 +0000 UTC m=+0.083860614 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 10:17:22 compute-0 nova_compute[185194]: 2026-01-31 10:17:22.175 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:22 compute-0 nova_compute[185194]: 2026-01-31 10:17:22.764 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:27 compute-0 nova_compute[185194]: 2026-01-31 10:17:27.179 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:27 compute-0 nova_compute[185194]: 2026-01-31 10:17:27.768 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:29 compute-0 podman[201068]: time="2026-01-31T10:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:17:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:17:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 31 10:17:29 compute-0 podman[245659]: 2026-01-31 10:17:29.96219065 +0000 UTC m=+0.084328436 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:17:31 compute-0 openstack_network_exporter[204162]: ERROR   10:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:17:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:17:31 compute-0 openstack_network_exporter[204162]: ERROR   10:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:17:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:17:32 compute-0 nova_compute[185194]: 2026-01-31 10:17:32.181 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:32 compute-0 nova_compute[185194]: 2026-01-31 10:17:32.771 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:34 compute-0 podman[245683]: 2026-01-31 10:17:34.952190053 +0000 UTC m=+0.080333925 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc)
Jan 31 10:17:37 compute-0 nova_compute[185194]: 2026-01-31 10:17:37.185 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:37 compute-0 nova_compute[185194]: 2026-01-31 10:17:37.774 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:37 compute-0 podman[245702]: 2026-01-31 10:17:37.966338019 +0000 UTC m=+0.092086673 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 10:17:42 compute-0 nova_compute[185194]: 2026-01-31 10:17:42.188 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:42 compute-0 nova_compute[185194]: 2026-01-31 10:17:42.777 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:44 compute-0 podman[245721]: 2026-01-31 10:17:44.745300093 +0000 UTC m=+0.074370513 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 31 10:17:44 compute-0 podman[245720]: 2026-01-31 10:17:44.785420334 +0000 UTC m=+0.116466064 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:17:47 compute-0 nova_compute[185194]: 2026-01-31 10:17:47.191 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:47 compute-0 nova_compute[185194]: 2026-01-31 10:17:47.780 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:48 compute-0 podman[245763]: 2026-01-31 10:17:48.113388513 +0000 UTC m=+0.122346433 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:17:52 compute-0 nova_compute[185194]: 2026-01-31 10:17:52.192 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:52 compute-0 nova_compute[185194]: 2026-01-31 10:17:52.783 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:52 compute-0 podman[245789]: 2026-01-31 10:17:52.965318173 +0000 UTC m=+0.067637742 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:17:52 compute-0 podman[245788]: 2026-01-31 10:17:52.987586759 +0000 UTC m=+0.094012982 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, config_id=kepler, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Jan 31 10:17:54 compute-0 nova_compute[185194]: 2026-01-31 10:17:54.229 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:54 compute-0 nova_compute[185194]: 2026-01-31 10:17:54.230 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:17:54 compute-0 nova_compute[185194]: 2026-01-31 10:17:54.710 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:17:54 compute-0 nova_compute[185194]: 2026-01-31 10:17:54.711 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:17:54 compute-0 nova_compute[185194]: 2026-01-31 10:17:54.711 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.100 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.117 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.118 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.119 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.119 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.120 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:56 compute-0 nova_compute[185194]: 2026-01-31 10:17:56.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:57 compute-0 nova_compute[185194]: 2026-01-31 10:17:57.195 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:57 compute-0 nova_compute[185194]: 2026-01-31 10:17:57.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:57 compute-0 nova_compute[185194]: 2026-01-31 10:17:57.787 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:17:58 compute-0 nova_compute[185194]: 2026-01-31 10:17:58.601 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:59 compute-0 nova_compute[185194]: 2026-01-31 10:17:59.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:17:59 compute-0 nova_compute[185194]: 2026-01-31 10:17:59.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:17:59 compute-0 podman[201068]: time="2026-01-31T10:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:17:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:17:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 31 10:18:00 compute-0 podman[245826]: 2026-01-31 10:18:00.958917669 +0000 UTC m=+0.085733230 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:18:01 compute-0 openstack_network_exporter[204162]: ERROR   10:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:18:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:18:01 compute-0 openstack_network_exporter[204162]: ERROR   10:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:18:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:18:01 compute-0 nova_compute[185194]: 2026-01-31 10:18:01.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:02 compute-0 nova_compute[185194]: 2026-01-31 10:18:02.199 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:02 compute-0 nova_compute[185194]: 2026-01-31 10:18:02.790 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.630 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.718 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.768 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.769 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.830 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.832 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.894 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.895 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.951 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:03 compute-0 nova_compute[185194]: 2026-01-31 10:18:03.959 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.010 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.011 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.066 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.067 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.123 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.124 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.178 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.184 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.242 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.243 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.314 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.315 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.363 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.364 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.415 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.771 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.772 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4659MB free_disk=72.35451889038086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.772 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.773 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.886 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.887 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.887 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.887 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.888 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.963 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.980 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.981 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:18:04 compute-0 nova_compute[185194]: 2026-01-31 10:18:04.981 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:18:05 compute-0 podman[245885]: 2026-01-31 10:18:05.996978157 +0000 UTC m=+0.115351949 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, release=1769056855, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter)
Jan 31 10:18:07 compute-0 nova_compute[185194]: 2026-01-31 10:18:07.201 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:07 compute-0 nova_compute[185194]: 2026-01-31 10:18:07.793 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:08 compute-0 podman[245907]: 2026-01-31 10:18:08.951163629 +0000 UTC m=+0.065517151 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:18:12 compute-0 nova_compute[185194]: 2026-01-31 10:18:12.202 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:12 compute-0 nova_compute[185194]: 2026-01-31 10:18:12.796 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:14 compute-0 podman[245924]: 2026-01-31 10:18:14.977334804 +0000 UTC m=+0.100595271 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 10:18:14 compute-0 podman[245925]: 2026-01-31 10:18:14.987794622 +0000 UTC m=+0.105042315 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 10:18:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:18:16.445 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:18:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:18:16.446 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:18:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:18:16.446 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:18:17 compute-0 nova_compute[185194]: 2026-01-31 10:18:17.205 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:17 compute-0 nova_compute[185194]: 2026-01-31 10:18:17.799 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:18 compute-0 podman[245966]: 2026-01-31 10:18:18.965275762 +0000 UTC m=+0.090641014 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:18:22 compute-0 nova_compute[185194]: 2026-01-31 10:18:22.206 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:22 compute-0 nova_compute[185194]: 2026-01-31 10:18:22.802 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:24 compute-0 podman[245993]: 2026-01-31 10:18:24.003464272 +0000 UTC m=+0.103018763 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 31 10:18:24 compute-0 podman[245992]: 2026-01-31 10:18:24.018444776 +0000 UTC m=+0.129907312 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Jan 31 10:18:27 compute-0 nova_compute[185194]: 2026-01-31 10:18:27.208 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:27 compute-0 nova_compute[185194]: 2026-01-31 10:18:27.805 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:29 compute-0 podman[201068]: time="2026-01-31T10:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:18:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:18:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4389 "" "Go-http-client/1.1"
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.702 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.703 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.703 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9484c3410>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.715 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.718 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'name': 'vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.721 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.721 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:18:30.722038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:18:30.724101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.728 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.733 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.738 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.740 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.740 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.742 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:18:30.740838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:18:30.743275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.775 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.776 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.776 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.818 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.819 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.819 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.845 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.846 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.846 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.847 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:18:30.847525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.943 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.944 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:30.945 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.031 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.032 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.032 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.106 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.106 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.107 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.108 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:18:31.108283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.130 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.151 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.173 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.173 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.174 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.175 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.175 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.175 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 838367565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:18:31.174763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.176 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 100970814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.177 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.latency volume: 90399626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.177 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.177 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.178 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.179 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.180 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.180 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:18:31.179663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:18:31.182017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.182 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.183 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.183 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.184 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.184 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.184 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.184 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.185 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.185 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.186 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.187 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.187 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.187 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:18:31.186653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:18:31.188391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.189 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.189 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.189 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.189 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.190 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.190 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.190 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.190 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.192 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.192 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.192 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.193 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.194 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.194 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.194 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.195 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.196 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.196 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.197 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:18:31.192068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.198 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 47880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/cpu volume: 41280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:18:31.194030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.199 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 38140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.200 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.201 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.202 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.202 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.202 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:18:31.195577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:18:31.197380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:18:31.199071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:18:31.200666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:18:31.201930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.204 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.204 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.204 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.204 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.204 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.205 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.205 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.205 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.205 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:18:31.203864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 3024146875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.207 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 9621170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.208 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.208 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.208 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.208 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.209 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.210 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.210 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.210 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.211 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.212 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.212 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.212 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.212 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.213 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.213 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.214 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.215 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.215 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.215 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.216 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:18:31.207035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 14 DEBUG ceilometer.compute.pollsters [-] 37c4cead-85b0-43c5-9ae1-9b6b45d7a497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:18:31.209823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:18:31.211141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:18:31.214058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:18:31.217024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.218 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.219 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.220 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.221 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.222 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.222 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:18:31.222 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:18:31 compute-0 openstack_network_exporter[204162]: ERROR   10:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:18:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:18:31 compute-0 openstack_network_exporter[204162]: ERROR   10:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:18:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:18:31 compute-0 podman[246030]: 2026-01-31 10:18:31.984122034 +0000 UTC m=+0.096789983 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:18:32 compute-0 nova_compute[185194]: 2026-01-31 10:18:32.211 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:32 compute-0 nova_compute[185194]: 2026-01-31 10:18:32.808 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:36 compute-0 podman[246054]: 2026-01-31 10:18:36.998308891 +0000 UTC m=+0.116029296 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.openshift.expose-services=)
Jan 31 10:18:37 compute-0 nova_compute[185194]: 2026-01-31 10:18:37.214 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:37 compute-0 nova_compute[185194]: 2026-01-31 10:18:37.811 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:39 compute-0 podman[246075]: 2026-01-31 10:18:39.968310081 +0000 UTC m=+0.094774582 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:18:42 compute-0 nova_compute[185194]: 2026-01-31 10:18:42.216 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:42 compute-0 nova_compute[185194]: 2026-01-31 10:18:42.814 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:45 compute-0 podman[246095]: 2026-01-31 10:18:45.972692698 +0000 UTC m=+0.080777642 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2)
Jan 31 10:18:46 compute-0 podman[246094]: 2026-01-31 10:18:46.020327159 +0000 UTC m=+0.136736167 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 10:18:47 compute-0 nova_compute[185194]: 2026-01-31 10:18:47.219 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:47 compute-0 nova_compute[185194]: 2026-01-31 10:18:47.818 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:49 compute-0 podman[246139]: 2026-01-31 10:18:49.986921782 +0000 UTC m=+0.106013570 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:18:52 compute-0 nova_compute[185194]: 2026-01-31 10:18:52.221 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:52 compute-0 nova_compute[185194]: 2026-01-31 10:18:52.821 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:53 compute-0 nova_compute[185194]: 2026-01-31 10:18:53.981 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:54 compute-0 nova_compute[185194]: 2026-01-31 10:18:54.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:54 compute-0 nova_compute[185194]: 2026-01-31 10:18:54.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:18:54 compute-0 podman[246163]: 2026-01-31 10:18:54.740932454 +0000 UTC m=+0.103944466 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:18:54 compute-0 podman[246162]: 2026-01-31 10:18:54.749486743 +0000 UTC m=+0.105126926 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal 
Base Image 9, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=kepler)
Jan 31 10:18:54 compute-0 nova_compute[185194]: 2026-01-31 10:18:54.811 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:18:54 compute-0 nova_compute[185194]: 2026-01-31 10:18:54.812 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:18:54 compute-0 nova_compute[185194]: 2026-01-31 10:18:54.812 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.224 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.736 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.761 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.762 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.764 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.765 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.767 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:57 compute-0 nova_compute[185194]: 2026-01-31 10:18:57.824 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:18:58 compute-0 nova_compute[185194]: 2026-01-31 10:18:58.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:18:59 compute-0 podman[201068]: time="2026-01-31T10:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:18:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:18:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4391 "" "Go-http-client/1.1"
Jan 31 10:19:01 compute-0 openstack_network_exporter[204162]: ERROR   10:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:19:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:19:01 compute-0 openstack_network_exporter[204162]: ERROR   10:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:19:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:19:01 compute-0 nova_compute[185194]: 2026-01-31 10:19:01.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:01 compute-0 nova_compute[185194]: 2026-01-31 10:19:01.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:19:02 compute-0 nova_compute[185194]: 2026-01-31 10:19:02.227 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:02 compute-0 nova_compute[185194]: 2026-01-31 10:19:02.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:02 compute-0 nova_compute[185194]: 2026-01-31 10:19:02.827 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:02 compute-0 podman[246200]: 2026-01-31 10:19:02.92889696 +0000 UTC m=+0.052073806 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.631 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.733 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.808 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.810 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.858 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.859 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.937 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:03 compute-0 nova_compute[185194]: 2026-01-31 10:19:03.939 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.004 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.012 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.073 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.074 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.150 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.151 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.215 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.216 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.298 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.309 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.374 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.374 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.456 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.458 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.546 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.547 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.615 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:19:04 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.998 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:04.999 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4656MB free_disk=72.35451889038086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.000 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.000 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.076 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.076 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.076 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.077 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.077 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.090 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.107 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.107 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.118 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.135 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.202 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.216 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.217 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:19:05 compute-0 nova_compute[185194]: 2026-01-31 10:19:05.217 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.342 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.343 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.343 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.344 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.344 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.346 185198 INFO nova.compute.manager [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Terminating instance
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.347 185198 DEBUG nova.compute.manager [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:19:06 compute-0 kernel: tapfd2acd14-79 (unregistering): left promiscuous mode
Jan 31 10:19:06 compute-0 NetworkManager[56281]: <info>  [1769854746.4031] device (tapfd2acd14-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.412 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 ovn_controller[97627]: 2026-01-31T10:19:06Z|00054|binding|INFO|Releasing lport fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f from this chassis (sb_readonly=0)
Jan 31 10:19:06 compute-0 ovn_controller[97627]: 2026-01-31T10:19:06Z|00055|binding|INFO|Setting lport fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f down in Southbound
Jan 31 10:19:06 compute-0 ovn_controller[97627]: 2026-01-31T10:19:06Z|00056|binding|INFO|Removing iface tapfd2acd14-79 ovn-installed in OVS
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.418 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.424 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:a6:6c 192.168.0.107'], port_security=['fa:16:3e:f8:a6:6c 192.168.0.107'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-port-d3sykcgvkxbi', 'neutron:cidrs': '192.168.0.107/24', 'neutron:device_id': '37c4cead-85b0-43c5-9ae1-9b6b45d7a497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-port-d3sykcgvkxbi', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.426 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.427 106883 INFO neutron.agent.ovn.metadata.agent [-] Port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b unbound from our chassis
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.428 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.444 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d48abc-ebb2-40f3-b668-3a095b986fb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 31 10:19:06 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 2min 9.767s CPU time.
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.468 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[488a5445-e823-4fb4-bca6-b7ff410e7016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 systemd-machined[156556]: Machine qemu-3-instance-00000003 terminated.
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.471 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e8440d-465d-4fc1-bf1d-cf1a0fd50922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.495 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[35588e21-ed21-46e9-a395-9ce5e4183fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.508 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[6bfad70a-73d9-45af-9748-6a18959f16f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 44523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246278, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.521 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3857838a-e900-40c9-ad5f-caf2953da03b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246279, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246279, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.522 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.524 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.528 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.529 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.529 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.530 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.530 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.570 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.575 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.626 185198 INFO nova.virt.libvirt.driver [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Instance destroyed successfully.
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.627 185198 DEBUG nova.objects.instance [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'resources' on Instance uuid 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.652 185198 DEBUG nova.virt.libvirt.vif [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T10:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-ecdlqmaqse46-i43tp3btm4ms-vnf-igd2tqgvqxy3',id=3,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T10:06:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-1vrnt46k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T10:06:22Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09OTA1NzUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 31 10:19:06 compute-0 nova_compute[185194]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09OTA1N
zUzMTkyNjUzMTA1MTUyNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTkwNTc1MzE5MjY1MzEwNTE1MjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT05MDU3NTMxOTI2NTMxMDUxNTI1PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=37c4cead-85b0-43c5-9ae1-9b6b45d7a497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.652 185198 DEBUG nova.network.os_vif_util [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.653 185198 DEBUG nova.network.os_vif_util [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.653 185198 DEBUG os_vif [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.655 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.655 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd2acd14-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.657 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.659 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.663 185198 INFO os_vif [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd2acd14-79')
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.663 185198 INFO nova.virt.libvirt.driver [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Deleting instance files /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497_del
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.664 185198 INFO nova.virt.libvirt.driver [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Deletion of /var/lib/nova/instances/37c4cead-85b0-43c5-9ae1-9b6b45d7a497_del complete
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.699 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:19:06 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:06.700 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.701 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.733 185198 INFO nova.compute.manager [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.733 185198 DEBUG oslo.service.loopingcall [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.734 185198 DEBUG nova.compute.manager [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.734 185198 DEBUG nova.network.neutron [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.806 185198 DEBUG nova.compute.manager [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-vif-unplugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.807 185198 DEBUG oslo_concurrency.lockutils [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.807 185198 DEBUG oslo_concurrency.lockutils [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.808 185198 DEBUG oslo_concurrency.lockutils [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.809 185198 DEBUG nova.compute.manager [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] No waiting events found dispatching network-vif-unplugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.809 185198 DEBUG nova.compute.manager [req-13975d37-ad13-4f38-96ea-bc06842cf0e1 req-39ab4e02-146b-4f6c-8ddd-58a42e33f8bb cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-vif-unplugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:19:06 compute-0 rsyslogd[235457]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 10:19:06 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:19:06.652 185198 DEBUG nova.virt.libvirt.vif [None req-f4e834a3-d0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:19:06 compute-0 rsyslogd[235457]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.881 185198 DEBUG nova.compute.manager [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-changed-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.882 185198 DEBUG nova.compute.manager [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Refreshing instance network info cache due to event network-changed-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.882 185198 DEBUG oslo_concurrency.lockutils [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.883 185198 DEBUG oslo_concurrency.lockutils [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:19:06 compute-0 nova_compute[185194]: 2026-01-31 10:19:06.884 185198 DEBUG nova.network.neutron [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Refreshing network info cache for port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:19:07 compute-0 nova_compute[185194]: 2026-01-31 10:19:07.229 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:07 compute-0 nova_compute[185194]: 2026-01-31 10:19:07.892 185198 DEBUG nova.network.neutron [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:19:07 compute-0 nova_compute[185194]: 2026-01-31 10:19:07.910 185198 INFO nova.compute.manager [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Took 1.18 seconds to deallocate network for instance.
Jan 31 10:19:07 compute-0 nova_compute[185194]: 2026-01-31 10:19:07.956 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:07 compute-0 nova_compute[185194]: 2026-01-31 10:19:07.957 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:07 compute-0 podman[246302]: 2026-01-31 10:19:07.957315399 +0000 UTC m=+0.080392512 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, 
distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.005 185198 DEBUG nova.network.neutron [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updated VIF entry in instance network info cache for port fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.006 185198 DEBUG nova.network.neutron [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Updating instance_info_cache with network_info: [{"id": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "address": "fa:16:3e:f8:a6:6c", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.107", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd2acd14-79", "ovs_interfaceid": "fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.035 185198 DEBUG oslo_concurrency.lockutils [req-896c62b2-471a-431f-8350-5f0a62e121f3 req-bd0f8080-ae84-4e8d-94ec-1e45ae341529 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-37c4cead-85b0-43c5-9ae1-9b6b45d7a497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.057 185198 DEBUG nova.compute.provider_tree [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.073 185198 DEBUG nova.scheduler.client.report [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.097 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.149 185198 INFO nova.scheduler.client.report [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Deleted allocations for instance 37c4cead-85b0-43c5-9ae1-9b6b45d7a497
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.227 185198 DEBUG oslo_concurrency.lockutils [None req-f4e834a3-d0cb-491b-bbc5-8464aa7dbacf d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.886 185198 DEBUG nova.compute.manager [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.886 185198 DEBUG oslo_concurrency.lockutils [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.886 185198 DEBUG oslo_concurrency.lockutils [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.886 185198 DEBUG oslo_concurrency.lockutils [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "37c4cead-85b0-43c5-9ae1-9b6b45d7a497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.887 185198 DEBUG nova.compute.manager [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] No waiting events found dispatching network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:19:08 compute-0 nova_compute[185194]: 2026-01-31 10:19:08.887 185198 WARNING nova.compute.manager [req-65d91fc3-8b99-4da3-9b10-ac939cf63f04 req-60e59786-0aa3-4d9d-b74d-079dba305c03 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Received unexpected event network-vif-plugged-fd2acd14-79e8-4b29-ba8d-1c30c6bf8f1f for instance with vm_state deleted and task_state None.
Jan 31 10:19:09 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:09.703 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:19:10 compute-0 podman[246324]: 2026-01-31 10:19:10.995019533 +0000 UTC m=+0.120034429 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 10:19:11 compute-0 nova_compute[185194]: 2026-01-31 10:19:11.658 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:12 compute-0 nova_compute[185194]: 2026-01-31 10:19:12.231 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:16.447 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:16.448 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:19:16.448 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:16 compute-0 nova_compute[185194]: 2026-01-31 10:19:16.662 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:16 compute-0 podman[246344]: 2026-01-31 10:19:16.988163063 +0000 UTC m=+0.097449410 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 10:19:16 compute-0 podman[246343]: 2026-01-31 10:19:16.998891028 +0000 UTC m=+0.113450520 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:19:17 compute-0 nova_compute[185194]: 2026-01-31 10:19:17.234 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:21 compute-0 podman[246388]: 2026-01-31 10:19:21.010551837 +0000 UTC m=+0.117937365 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:19:21 compute-0 nova_compute[185194]: 2026-01-31 10:19:21.624 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769854746.6230779, 37c4cead-85b0-43c5-9ae1-9b6b45d7a497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:19:21 compute-0 nova_compute[185194]: 2026-01-31 10:19:21.624 185198 INFO nova.compute.manager [-] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] VM Stopped (Lifecycle Event)
Jan 31 10:19:21 compute-0 nova_compute[185194]: 2026-01-31 10:19:21.643 185198 DEBUG nova.compute.manager [None req-2ad95528-cc9e-4dbe-b93f-bfa2ca036d41 - - - - - -] [instance: 37c4cead-85b0-43c5-9ae1-9b6b45d7a497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:19:21 compute-0 nova_compute[185194]: 2026-01-31 10:19:21.663 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:22 compute-0 nova_compute[185194]: 2026-01-31 10:19:22.236 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:24 compute-0 podman[246411]: 2026-01-31 10:19:24.997339319 +0000 UTC m=+0.105577508 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, config_id=kepler)
Jan 31 10:19:25 compute-0 podman[246412]: 2026-01-31 10:19:25.01568952 +0000 UTC m=+0.113032599 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:19:26 compute-0 nova_compute[185194]: 2026-01-31 10:19:26.664 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:27 compute-0 nova_compute[185194]: 2026-01-31 10:19:27.238 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:29 compute-0 podman[201068]: time="2026-01-31T10:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:19:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:19:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 31 10:19:31 compute-0 openstack_network_exporter[204162]: ERROR   10:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:19:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:19:31 compute-0 openstack_network_exporter[204162]: ERROR   10:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:19:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:19:31 compute-0 nova_compute[185194]: 2026-01-31 10:19:31.666 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:32 compute-0 nova_compute[185194]: 2026-01-31 10:19:32.240 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:33 compute-0 podman[246449]: 2026-01-31 10:19:33.958506586 +0000 UTC m=+0.071820533 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:19:36 compute-0 nova_compute[185194]: 2026-01-31 10:19:36.668 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:37 compute-0 nova_compute[185194]: 2026-01-31 10:19:37.241 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:39 compute-0 podman[246473]: 2026-01-31 10:19:39.000430762 +0000 UTC m=+0.118154131 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, vcs-type=git, 
io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container)
Jan 31 10:19:39 compute-0 ovn_controller[97627]: 2026-01-31T10:19:39Z|00057|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 31 10:19:41 compute-0 nova_compute[185194]: 2026-01-31 10:19:41.671 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:41 compute-0 podman[246495]: 2026-01-31 10:19:41.977165451 +0000 UTC m=+0.094300709 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:19:42 compute-0 nova_compute[185194]: 2026-01-31 10:19:42.244 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:42 compute-0 sshd-session[246514]: Accepted publickey for zuul from 38.102.83.5 port 56794 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 10:19:42 compute-0 systemd-logind[795]: New session 29 of user zuul.
Jan 31 10:19:42 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 31 10:19:42 compute-0 sshd-session[246514]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 10:19:43 compute-0 sudo[246691]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldpcdcxxldipcdrnbcpvwksoyzdchai ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769854782.6839156-59841-13844601481906/AnsiballZ_command.py'
Jan 31 10:19:43 compute-0 sudo[246691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:19:43 compute-0 python3[246693]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 10:19:43 compute-0 sudo[246691]: pam_unix(sudo:session): session closed for user root
Jan 31 10:19:46 compute-0 nova_compute[185194]: 2026-01-31 10:19:46.674 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:47 compute-0 nova_compute[185194]: 2026-01-31 10:19:47.246 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:47 compute-0 nova_compute[185194]: 2026-01-31 10:19:47.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:47 compute-0 nova_compute[185194]: 2026-01-31 10:19:47.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:19:47 compute-0 podman[246731]: 2026-01-31 10:19:47.995626567 +0000 UTC m=+0.101286089 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260126, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:19:48 compute-0 podman[246730]: 2026-01-31 10:19:48.022835384 +0000 UTC m=+0.138221195 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 10:19:51 compute-0 nova_compute[185194]: 2026-01-31 10:19:51.678 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:51 compute-0 podman[246775]: 2026-01-31 10:19:51.951577575 +0000 UTC m=+0.075451674 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:19:52 compute-0 nova_compute[185194]: 2026-01-31 10:19:52.249 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:53 compute-0 nova_compute[185194]: 2026-01-31 10:19:53.640 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:55 compute-0 nova_compute[185194]: 2026-01-31 10:19:55.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:55 compute-0 nova_compute[185194]: 2026-01-31 10:19:55.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:19:55 compute-0 nova_compute[185194]: 2026-01-31 10:19:55.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:19:55 compute-0 podman[246800]: 2026-01-31 10:19:55.999047072 +0000 UTC m=+0.110396542 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, release-0.7.12=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 31 10:19:56 compute-0 podman[246801]: 2026-01-31 10:19:56.012432125 +0000 UTC m=+0.121050665 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 31 10:19:56 compute-0 nova_compute[185194]: 2026-01-31 10:19:56.680 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:56 compute-0 nova_compute[185194]: 2026-01-31 10:19:56.722 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:19:56 compute-0 nova_compute[185194]: 2026-01-31 10:19:56.722 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:19:56 compute-0 nova_compute[185194]: 2026-01-31 10:19:56.722 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:19:56 compute-0 nova_compute[185194]: 2026-01-31 10:19:56.722 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:19:57 compute-0 nova_compute[185194]: 2026-01-31 10:19:57.252 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.103 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.128 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.129 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.130 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.130 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.131 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.131 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.180 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.180 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.194 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.274 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.275 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.285 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.286 185198 INFO nova.compute.claims [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.433 185198 DEBUG nova.compute.provider_tree [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.456 185198 DEBUG nova.scheduler.client.report [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.523 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.524 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.576 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.614 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:19:59 compute-0 podman[201068]: time="2026-01-31T10:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.752 185198 INFO nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:19:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:19:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4393 "" "Go-http-client/1.1"
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.800 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.909 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.910 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.911 185198 INFO nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Creating image(s)
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.911 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.912 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.912 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.913 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "77bca481b205ef365f4321b105547570655fdb07" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:19:59 compute-0 nova_compute[185194]: 2026-01-31 10:19:59.913 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "77bca481b205ef365f4321b105547570655fdb07" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:00 compute-0 nova_compute[185194]: 2026-01-31 10:20:00.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.128 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.183 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.part --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.184 185198 DEBUG nova.virt.images [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] 3a7f5c30-69eb-44b7-960f-3030fec78432 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.189 185198 DEBUG nova.privsep.utils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.190 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.part /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 openstack_network_exporter[204162]: ERROR   10:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:20:01 compute-0 openstack_network_exporter[204162]: ERROR   10:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.474 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.part /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.converted" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.477 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.524 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07.converted --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.525 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "77bca481b205ef365f4321b105547570655fdb07" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.538 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.585 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.586 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "77bca481b205ef365f4321b105547570655fdb07" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.586 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "77bca481b205ef365f4321b105547570655fdb07" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.597 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.644 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.645 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07,backing_fmt=raw /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.683 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.692 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07,backing_fmt=raw /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.692 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "77bca481b205ef365f4321b105547570655fdb07" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.693 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.779 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.782 185198 DEBUG nova.virt.disk.api [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Checking if we can resize image /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.783 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.863 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.864 185198 DEBUG nova.virt.disk.api [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Cannot resize image /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.865 185198 DEBUG nova.objects.instance [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.886 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.887 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.889 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.914 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.994 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.995 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:01 compute-0 nova_compute[185194]: 2026-01-31 10:20:01.996 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.009 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.084 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.085 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.124 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.125 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.125 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.174 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.175 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.175 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Ensure instance console log exists: /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.176 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.176 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.177 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.178 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T10:19:47Z,direct_url=<?>,disk_format='qcow2',id=3a7f5c30-69eb-44b7-960f-3030fec78432,min_disk=0,min_ram=0,name='fvt_testing_image',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T10:19:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '3a7f5c30-69eb-44b7-960f-3030fec78432'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.185 185198 WARNING nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.194 185198 DEBUG nova.virt.libvirt.host [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.196 185198 DEBUG nova.virt.libvirt.host [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.201 185198 DEBUG nova.virt.libvirt.host [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.202 185198 DEBUG nova.virt.libvirt.host [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.203 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.203 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T10:19:54Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='aa9e8b54-4c7f-49b7-a00b-578054b68ddf',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-31T10:19:47Z,direct_url=<?>,disk_format='qcow2',id=3a7f5c30-69eb-44b7-960f-3030fec78432,min_disk=0,min_ram=0,name='fvt_testing_image',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-31T10:19:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.204 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.204 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.205 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.205 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.206 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.206 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.207 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.207 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.207 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.208 185198 DEBUG nova.virt.hardware [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.214 185198 DEBUG nova.objects.instance [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.230 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <uuid>7ad3b29c-d50b-4410-9b74-d4c6f8211d3a</uuid>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <name>instance-00000005</name>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <memory>524288</memory>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:name>fvt_testing_server</nova:name>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:20:02</nova:creationTime>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:flavor name="fvt_testing_flavor">
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:memory>512</nova:memory>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:ephemeral>1</nova:ephemeral>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:user uuid="d3342a7282114996b6010246d4ade24e">admin</nova:user>
Jan 31 10:20:02 compute-0 nova_compute[185194]:         <nova:project uuid="155389cbed6644acacdbeeb6155adb54">admin</nova:project>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="3a7f5c30-69eb-44b7-960f-3030fec78432"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <nova:ports/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <system>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="serial">7ad3b29c-d50b-4410-9b74-d4c6f8211d3a</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="uuid">7ad3b29c-d50b-4410-9b74-d4c6f8211d3a</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </system>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <os>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </os>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <features>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </features>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <target dev="vdb" bus="virtio"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.config"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/console.log" append="off"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <video>
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </video>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:20:02 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:20:02 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:20:02 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:20:02 compute-0 nova_compute[185194]: </domain>
Jan 31 10:20:02 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.254 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.295 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.295 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.296 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.296 185198 INFO nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Using config drive
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.419 185198 INFO nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Creating config drive at /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.config
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.424 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg_rwn20i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.548 185198 DEBUG oslo_concurrency.processutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpg_rwn20i" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:02 compute-0 nova_compute[185194]: 2026-01-31 10:20:02.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:02 compute-0 systemd-machined[156556]: New machine qemu-5-instance-00000005.
Jan 31 10:20:02 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.521 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769854803.5209877, 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.523 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] VM Resumed (Lifecycle Event)
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.525 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.526 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.531 185198 INFO nova.virt.libvirt.driver [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Instance spawned successfully.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.531 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.558 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.564 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.570 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.571 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.571 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.572 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.572 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.573 185198 DEBUG nova.virt.libvirt.driver [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.592 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.593 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769854803.5224984, 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.593 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] VM Started (Lifecycle Event)
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.623 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.629 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.633 185198 INFO nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Took 3.72 seconds to spawn the instance on the hypervisor.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.634 185198 DEBUG nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.662 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.691 185198 INFO nova.compute.manager [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Took 4.45 seconds to build instance.
Jan 31 10:20:03 compute-0 nova_compute[185194]: 2026-01-31 10:20:03.718 185198 DEBUG oslo_concurrency.lockutils [None req-42918afe-9ec0-4c70-9bb6-b464ca1a028c d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.630 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.631 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.631 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.632 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.773 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:04 compute-0 podman[246906]: 2026-01-31 10:20:04.80821208 +0000 UTC m=+0.104435359 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.827 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.829 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.891 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.895 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.952 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:04 compute-0 nova_compute[185194]: 2026-01-31 10:20:04.953 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.016 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.025 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.078 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.079 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.138 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.139 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.188 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.199 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.246 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.256 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.303 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.304 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.351 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.352 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.419 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.420 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.467 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.780 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.782 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4794MB free_disk=72.34931564331055GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.782 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.782 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.934 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.935 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.935 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.936 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:20:05 compute-0 nova_compute[185194]: 2026-01-31 10:20:05.936 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.149 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.170 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.193 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.194 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.195 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.195 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.226 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.686 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:06 compute-0 nova_compute[185194]: 2026-01-31 10:20:06.835 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:07 compute-0 nova_compute[185194]: 2026-01-31 10:20:07.257 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:09 compute-0 podman[246987]: 2026-01-31 10:20:09.973785049 +0000 UTC m=+0.087677891 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64)
Jan 31 10:20:11 compute-0 nova_compute[185194]: 2026-01-31 10:20:11.691 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:12 compute-0 nova_compute[185194]: 2026-01-31 10:20:12.259 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:12 compute-0 podman[247008]: 2026-01-31 10:20:12.953362763 +0000 UTC m=+0.078633884 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 10:20:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:20:16.448 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:20:16.449 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:20:16.449 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:16 compute-0 nova_compute[185194]: 2026-01-31 10:20:16.693 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.114 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.114 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.115 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.115 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.115 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.116 185198 INFO nova.compute.manager [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Terminating instance
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.117 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "refresh_cache-7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.117 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquired lock "refresh_cache-7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.118 185198 DEBUG nova.network.neutron [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.261 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:17 compute-0 nova_compute[185194]: 2026-01-31 10:20:17.980 185198 DEBUG nova.network.neutron [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:20:19 compute-0 podman[247028]: 2026-01-31 10:20:19.002038325 +0000 UTC m=+0.118425202 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true)
Jan 31 10:20:19 compute-0 podman[247027]: 2026-01-31 10:20:19.003687146 +0000 UTC m=+0.129776326 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.239 185198 DEBUG nova.network.neutron [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.267 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Releasing lock "refresh_cache-7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.268 185198 DEBUG nova.compute.manager [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:20:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 31 10:20:19 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.939s CPU time.
Jan 31 10:20:19 compute-0 systemd-machined[156556]: Machine qemu-5-instance-00000005 terminated.
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.540 185198 INFO nova.virt.libvirt.driver [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Instance destroyed successfully.
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.541 185198 DEBUG nova.objects.instance [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'resources' on Instance uuid 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.563 185198 INFO nova.virt.libvirt.driver [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Deleting instance files /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a_del
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.565 185198 INFO nova.virt.libvirt.driver [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Deletion of /var/lib/nova/instances/7ad3b29c-d50b-4410-9b74-d4c6f8211d3a_del complete
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.615 185198 INFO nova.compute.manager [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Took 0.35 seconds to destroy the instance on the hypervisor.
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.616 185198 DEBUG oslo.service.loopingcall [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.617 185198 DEBUG nova.compute.manager [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.617 185198 DEBUG nova.network.neutron [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.953 185198 DEBUG nova.network.neutron [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.969 185198 DEBUG nova.network.neutron [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:20:19 compute-0 nova_compute[185194]: 2026-01-31 10:20:19.984 185198 INFO nova.compute.manager [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Took 0.37 seconds to deallocate network for instance.
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.025 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.026 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.132 185198 DEBUG nova.compute.provider_tree [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.146 185198 DEBUG nova.scheduler.client.report [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.166 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.191 185198 INFO nova.scheduler.client.report [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Deleted allocations for instance 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a
Jan 31 10:20:20 compute-0 nova_compute[185194]: 2026-01-31 10:20:20.270 185198 DEBUG oslo_concurrency.lockutils [None req-3a4fde93-5978-483b-bd28-bb67a52b8007 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "7ad3b29c-d50b-4410-9b74-d4c6f8211d3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:21 compute-0 nova_compute[185194]: 2026-01-31 10:20:21.697 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:22 compute-0 nova_compute[185194]: 2026-01-31 10:20:22.264 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:22 compute-0 podman[247085]: 2026-01-31 10:20:22.973985264 +0000 UTC m=+0.094319017 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:20:26 compute-0 nova_compute[185194]: 2026-01-31 10:20:26.699 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:26 compute-0 podman[247109]: 2026-01-31 10:20:26.8051043 +0000 UTC m=+0.079640429 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, name=ubi9, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:20:26 compute-0 podman[247110]: 2026-01-31 10:20:26.814774112 +0000 UTC m=+0.090349827 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:20:27 compute-0 nova_compute[185194]: 2026-01-31 10:20:27.268 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:29 compute-0 podman[201068]: time="2026-01-31T10:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:20:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:20:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.703 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.703 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.712 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.717 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:20:30.718070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:20:30.719944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.724 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.726 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:20:30.728369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:20:30.729524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.751 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.752 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.752 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.775 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.776 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.776 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:20:30.777891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.854 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.856 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.856 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.934 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.935 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.935 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.936 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.936 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:20:30.936312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.954 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.978 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:20:30.979355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.979 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.980 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.980 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.980 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.980 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.982 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:20:30.982207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.982 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.982 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:20:30.983793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.984 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.984 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.984 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.985 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.985 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.985 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.986 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:20:30.986843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.987 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.987 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.988 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:20:30.988549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.989 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.989 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.989 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.990 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.990 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.990 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.992 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:20:30.992107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.992 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.992 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.993 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:20:30.993811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.994 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.994 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:20:30.995729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.996 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.996 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.997 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:20:30.997707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.998 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.998 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:30.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.000 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 49530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.000 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 39730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:20:30.999691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.001 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.001 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:20:31.001447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.002 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.002 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:20:31.003420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.004 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.004 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.004 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.005 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:20:31.005395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.005 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.006 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.006 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.006 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.007 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.007 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.009 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.009 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:20:31.009029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.010 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.010 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.010 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.011 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:20:31.012302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.012 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.013 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:20:31.014265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.014 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.015 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.015 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.015 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.016 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.016 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.017 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.017 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:20:31.017475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.018 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.018 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.018 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.019 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.019 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.020 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.021 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:20:31.020618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.021 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.022 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:20:31.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:20:31 compute-0 openstack_network_exporter[204162]: ERROR   10:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:20:31 compute-0 openstack_network_exporter[204162]: ERROR   10:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:20:31 compute-0 nova_compute[185194]: 2026-01-31 10:20:31.702 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:32 compute-0 nova_compute[185194]: 2026-01-31 10:20:32.270 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:34 compute-0 nova_compute[185194]: 2026-01-31 10:20:34.538 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769854819.5359814, 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:20:34 compute-0 nova_compute[185194]: 2026-01-31 10:20:34.539 185198 INFO nova.compute.manager [-] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] VM Stopped (Lifecycle Event)
Jan 31 10:20:34 compute-0 nova_compute[185194]: 2026-01-31 10:20:34.567 185198 DEBUG nova.compute.manager [None req-15c0e0e8-65ef-4ecd-880f-684d3d18894c - - - - - -] [instance: 7ad3b29c-d50b-4410-9b74-d4c6f8211d3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:20:34 compute-0 podman[247150]: 2026-01-31 10:20:34.976381793 +0000 UTC m=+0.089819283 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:20:36 compute-0 nova_compute[185194]: 2026-01-31 10:20:36.705 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:37 compute-0 nova_compute[185194]: 2026-01-31 10:20:37.272 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:40 compute-0 podman[247173]: 2026-01-31 10:20:40.966374494 +0000 UTC m=+0.085988338 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1769056855, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:20:41 compute-0 nova_compute[185194]: 2026-01-31 10:20:41.709 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:42 compute-0 nova_compute[185194]: 2026-01-31 10:20:42.275 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:43 compute-0 sshd-session[246517]: Received disconnect from 38.102.83.5 port 56794:11: disconnected by user
Jan 31 10:20:43 compute-0 sshd-session[246517]: Disconnected from user zuul 38.102.83.5 port 56794
Jan 31 10:20:43 compute-0 sshd-session[246514]: pam_unix(sshd:session): session closed for user zuul
Jan 31 10:20:43 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 31 10:20:43 compute-0 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Jan 31 10:20:43 compute-0 systemd-logind[795]: Removed session 29.
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.692 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.722 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid a6212880-427f-4876-8598-06909416bde1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.724 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.724 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "a6212880-427f-4876-8598-06909416bde1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.725 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.725 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.726 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.752 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a6212880-427f-4876-8598-06909416bde1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:43 compute-0 nova_compute[185194]: 2026-01-31 10:20:43.761 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:20:43 compute-0 podman[247195]: 2026-01-31 10:20:43.797737458 +0000 UTC m=+0.096371578 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 10:20:46 compute-0 nova_compute[185194]: 2026-01-31 10:20:46.713 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:47 compute-0 nova_compute[185194]: 2026-01-31 10:20:47.278 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:49 compute-0 podman[247215]: 2026-01-31 10:20:49.964979025 +0000 UTC m=+0.083520366 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 31 10:20:49 compute-0 podman[247214]: 2026-01-31 10:20:49.986549836 +0000 UTC m=+0.101431485 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 10:20:51 compute-0 nova_compute[185194]: 2026-01-31 10:20:51.716 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:52 compute-0 nova_compute[185194]: 2026-01-31 10:20:52.280 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:53 compute-0 podman[247256]: 2026-01-31 10:20:53.974211805 +0000 UTC m=+0.089239099 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:20:54 compute-0 sshd-session[247281]: Accepted publickey for zuul from 38.102.83.5 port 44666 ssh2: RSA SHA256:XoK5buoos6Fm+u3PnPTKe+iwXA5nEfAVzLZkq77rYvQ
Jan 31 10:20:54 compute-0 systemd-logind[795]: New session 30 of user zuul.
Jan 31 10:20:54 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 31 10:20:54 compute-0 sshd-session[247281]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 10:20:55 compute-0 sudo[247458]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctluaxnnecoivmqvfchmxypotskixxvc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769854854.5919318-60602-251240332947425/AnsiballZ_command.py'
Jan 31 10:20:55 compute-0 sudo[247458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:20:55 compute-0 python3[247460]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 10:20:55 compute-0 sudo[247458]: pam_unix(sudo:session): session closed for user root
Jan 31 10:20:55 compute-0 nova_compute[185194]: 2026-01-31 10:20:55.639 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.608 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.609 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.719 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.972 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.972 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:20:56 compute-0 nova_compute[185194]: 2026-01-31 10:20:56.973 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:20:56 compute-0 podman[247501]: 2026-01-31 10:20:56.981565395 +0000 UTC m=+0.089793963 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:20:56 compute-0 podman[247500]: 2026-01-31 10:20:56.997059154 +0000 UTC m=+0.108747279 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, container_name=kepler, distribution-scope=public, version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:20:57 compute-0 nova_compute[185194]: 2026-01-31 10:20:57.284 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.253 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.273 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.274 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.275 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:58 compute-0 nova_compute[185194]: 2026-01-31 10:20:58.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:20:59 compute-0 podman[201068]: time="2026-01-31T10:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:20:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:20:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4388 "" "Go-http-client/1.1"
Jan 31 10:21:00 compute-0 nova_compute[185194]: 2026-01-31 10:21:00.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:21:01 compute-0 openstack_network_exporter[204162]: ERROR   10:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:21:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:21:01 compute-0 openstack_network_exporter[204162]: ERROR   10:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:21:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:21:01 compute-0 nova_compute[185194]: 2026-01-31 10:21:01.723 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:02 compute-0 nova_compute[185194]: 2026-01-31 10:21:02.287 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:02 compute-0 nova_compute[185194]: 2026-01-31 10:21:02.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:21:02 compute-0 nova_compute[185194]: 2026-01-31 10:21:02.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:21:02 compute-0 sudo[247711]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkenbtrowcjqzzwijexbujxbcqnzwzgk ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769854862.3177886-60769-111129649644251/AnsiballZ_command.py'
Jan 31 10:21:02 compute-0 sudo[247711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:21:02 compute-0 python3[247713]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 10:21:03 compute-0 sudo[247711]: pam_unix(sudo:session): session closed for user root
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.628 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.629 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.629 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.759 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.829 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.830 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.913 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.915 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.966 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:04 compute-0 nova_compute[185194]: 2026-01-31 10:21:04.967 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.038 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.046 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.099 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.100 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.152 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.153 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.216 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.217 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.267 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.654 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.656 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=72.35015106201172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.656 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.657 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.779 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.780 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.781 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.781 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.879 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:21:05 compute-0 nova_compute[185194]: 2026-01-31 10:21:05.972 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:21:06 compute-0 nova_compute[185194]: 2026-01-31 10:21:06.031 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:21:06 compute-0 nova_compute[185194]: 2026-01-31 10:21:06.031 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:21:06 compute-0 podman[247777]: 2026-01-31 10:21:06.031041782 +0000 UTC m=+0.140260439 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:21:06 compute-0 nova_compute[185194]: 2026-01-31 10:21:06.726 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:07 compute-0 nova_compute[185194]: 2026-01-31 10:21:07.290 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:11 compute-0 nova_compute[185194]: 2026-01-31 10:21:11.728 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:11 compute-0 podman[247900]: 2026-01-31 10:21:11.940359548 +0000 UTC m=+0.067375681 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1769056855, architecture=x86_64, com.redhat.component=ubi9-minimal-container, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, io.buildah.version=1.33.7)
Jan 31 10:21:12 compute-0 sudo[247992]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsvltgqnklkhhvtksfczuwyfclaacnx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769854871.5888658-60926-125857470624581/AnsiballZ_command.py'
Jan 31 10:21:12 compute-0 sudo[247992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:21:12 compute-0 python3[247994]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 10:21:12 compute-0 nova_compute[185194]: 2026-01-31 10:21:12.292 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:12 compute-0 sudo[247992]: pam_unix(sudo:session): session closed for user root
Jan 31 10:21:13 compute-0 podman[248034]: 2026-01-31 10:21:13.94711643 +0000 UTC m=+0.072699555 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 10:21:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:21:16.450 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:21:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:21:16.451 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:21:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:21:16.453 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:21:16 compute-0 nova_compute[185194]: 2026-01-31 10:21:16.731 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:17 compute-0 nova_compute[185194]: 2026-01-31 10:21:17.295 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:20 compute-0 podman[248055]: 2026-01-31 10:21:20.973614664 +0000 UTC m=+0.085651599 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 10:21:21 compute-0 podman[248054]: 2026-01-31 10:21:21.040088202 +0000 UTC m=+0.149038210 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:21:21 compute-0 nova_compute[185194]: 2026-01-31 10:21:21.734 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:22 compute-0 nova_compute[185194]: 2026-01-31 10:21:22.297 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:24 compute-0 podman[248097]: 2026-01-31 10:21:24.997608147 +0000 UTC m=+0.117828427 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:21:26 compute-0 sudo[248293]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxeocxaozeztuhkowbehyekymcfsrac ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769854885.8530812-61149-75373604136771/AnsiballZ_command.py'
Jan 31 10:21:26 compute-0 sudo[248293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:21:26 compute-0 python3[248295]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 10:21:26 compute-0 sudo[248293]: pam_unix(sudo:session): session closed for user root
Jan 31 10:21:26 compute-0 nova_compute[185194]: 2026-01-31 10:21:26.737 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:27 compute-0 nova_compute[185194]: 2026-01-31 10:21:27.301 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:28 compute-0 podman[248334]: 2026-01-31 10:21:28.010697943 +0000 UTC m=+0.117949310 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 31 10:21:28 compute-0 podman[248333]: 2026-01-31 10:21:28.032707175 +0000 UTC m=+0.139353247 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, io.openshift.expose-services=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 31 10:21:29 compute-0 podman[201068]: time="2026-01-31T10:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:21:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:21:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 31 10:21:31 compute-0 openstack_network_exporter[204162]: ERROR   10:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:21:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:21:31 compute-0 openstack_network_exporter[204162]: ERROR   10:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:21:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:21:31 compute-0 nova_compute[185194]: 2026-01-31 10:21:31.742 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:32 compute-0 nova_compute[185194]: 2026-01-31 10:21:32.303 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:36 compute-0 nova_compute[185194]: 2026-01-31 10:21:36.744 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:36 compute-0 podman[248371]: 2026-01-31 10:21:36.944341375 +0000 UTC m=+0.065950315 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:21:37 compute-0 nova_compute[185194]: 2026-01-31 10:21:37.306 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:41 compute-0 nova_compute[185194]: 2026-01-31 10:21:41.747 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:42 compute-0 nova_compute[185194]: 2026-01-31 10:21:42.309 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:42 compute-0 podman[248394]: 2026-01-31 10:21:42.984174764 +0000 UTC m=+0.096490321 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 31 10:21:44 compute-0 podman[248413]: 2026-01-31 10:21:44.757361385 +0000 UTC m=+0.066995042 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:21:46 compute-0 nova_compute[185194]: 2026-01-31 10:21:46.750 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:47 compute-0 nova_compute[185194]: 2026-01-31 10:21:47.311 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:51 compute-0 nova_compute[185194]: 2026-01-31 10:21:51.753 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:51 compute-0 podman[248434]: 2026-01-31 10:21:51.960118757 +0000 UTC m=+0.077211808 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, managed_by=edpm_ansible)
Jan 31 10:21:52 compute-0 podman[248433]: 2026-01-31 10:21:52.011100596 +0000 UTC m=+0.131520210 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 10:21:52 compute-0 nova_compute[185194]: 2026-01-31 10:21:52.315 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:56 compute-0 podman[248477]: 2026-01-31 10:21:56.000808083 +0000 UTC m=+0.106026651 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:21:56 compute-0 nova_compute[185194]: 2026-01-31 10:21:56.755 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:57 compute-0 nova_compute[185194]: 2026-01-31 10:21:57.318 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.033 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.034 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.034 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.926 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.926 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.926 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:21:58 compute-0 nova_compute[185194]: 2026-01-31 10:21:58.927 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:21:58 compute-0 podman[248500]: 2026-01-31 10:21:58.965552404 +0000 UTC m=+0.078122401 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 10:21:58 compute-0 podman[248499]: 2026-01-31 10:21:58.991438173 +0000 UTC m=+0.106683767 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:21:59 compute-0 podman[201068]: time="2026-01-31T10:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:21:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:21:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 31 10:22:01 compute-0 openstack_network_exporter[204162]: ERROR   10:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:22:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:22:01 compute-0 openstack_network_exporter[204162]: ERROR   10:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:22:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:22:01 compute-0 nova_compute[185194]: 2026-01-31 10:22:01.763 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.319 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.710 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [{"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.890 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-a6212880-427f-4876-8598-06909416bde1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.890 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.891 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.891 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.891 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.892 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.892 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.892 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:02 compute-0 nova_compute[185194]: 2026-01-31 10:22:02.892 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.632 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.663 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.664 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.664 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.664 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.821 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.886 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.888 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.946 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:04 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.947 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:04.999 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.000 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.053 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.061 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.139 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.139 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.189 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.191 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.275 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.277 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.337 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.695 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.696 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4803MB free_disk=72.34969329833984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.697 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.697 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.765 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.765 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.765 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.766 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.837 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.865 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.867 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:22:05 compute-0 nova_compute[185194]: 2026-01-31 10:22:05.867 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:22:06 compute-0 nova_compute[185194]: 2026-01-31 10:22:06.766 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:06 compute-0 nova_compute[185194]: 2026-01-31 10:22:06.861 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:07 compute-0 nova_compute[185194]: 2026-01-31 10:22:07.321 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:07 compute-0 podman[248561]: 2026-01-31 10:22:07.941718805 +0000 UTC m=+0.055993006 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:22:11 compute-0 nova_compute[185194]: 2026-01-31 10:22:11.769 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:12 compute-0 nova_compute[185194]: 2026-01-31 10:22:12.324 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:13 compute-0 podman[248586]: 2026-01-31 10:22:13.972139198 +0000 UTC m=+0.094052601 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Jan 31 10:22:14 compute-0 podman[248608]: 2026-01-31 10:22:14.979618412 +0000 UTC m=+0.080548932 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 10:22:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:22:16.451 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:22:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:22:16.452 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:22:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:22:16.453 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:22:16 compute-0 nova_compute[185194]: 2026-01-31 10:22:16.773 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:17 compute-0 nova_compute[185194]: 2026-01-31 10:22:17.329 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:21 compute-0 nova_compute[185194]: 2026-01-31 10:22:21.775 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:22 compute-0 nova_compute[185194]: 2026-01-31 10:22:22.331 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:22 compute-0 podman[248629]: 2026-01-31 10:22:22.949029578 +0000 UTC m=+0.073311250 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6)
Jan 31 10:22:22 compute-0 podman[248628]: 2026-01-31 10:22:22.960575158 +0000 UTC m=+0.088754057 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:22:26 compute-0 sshd-session[247284]: Received disconnect from 38.102.83.5 port 44666:11: disconnected by user
Jan 31 10:22:26 compute-0 sshd-session[247284]: Disconnected from user zuul 38.102.83.5 port 44666
Jan 31 10:22:26 compute-0 sshd-session[247281]: pam_unix(sshd:session): session closed for user zuul
Jan 31 10:22:26 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 31 10:22:26 compute-0 systemd[1]: session-30.scope: Consumed 3.811s CPU time.
Jan 31 10:22:26 compute-0 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Jan 31 10:22:26 compute-0 systemd-logind[795]: Removed session 30.
Jan 31 10:22:26 compute-0 nova_compute[185194]: 2026-01-31 10:22:26.778 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:26 compute-0 podman[248670]: 2026-01-31 10:22:26.804536315 +0000 UTC m=+0.065668638 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:22:27 compute-0 nova_compute[185194]: 2026-01-31 10:22:27.333 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:29 compute-0 podman[201068]: time="2026-01-31T10:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:22:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:22:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4387 "" "Go-http-client/1.1"
Jan 31 10:22:29 compute-0 podman[248694]: 2026-01-31 10:22:29.968767353 +0000 UTC m=+0.085563248 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, io.openshift.expose-services=, container_name=kepler, release-0.7.12=)
Jan 31 10:22:29 compute-0 podman[248695]: 2026-01-31 10:22:29.986072197 +0000 UTC m=+0.096771789 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.703 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.704 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.704 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.709 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a6212880-427f-4876-8598-06909416bde1', 'name': 'test_0', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8fe90>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.713 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'name': 'vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe', 'flavor': {'id': '5ace5526-788a-41cf-9e40-e75da8858688', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '8b57d666-88c0-4e62-a76a-0d45801ca1a6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '155389cbed6644acacdbeeb6155adb54', 'user_id': 'd3342a7282114996b6010246d4ade24e', 'hostId': '67a234e1b0525a8a9061af9dbb138e0672b3972e13dee38c5d99701d', 'status': 'active', 'metadata': {'metering.server_group': 'cd99fa32-2992-4cd0-a9a0-648127ea67dc'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.715 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-31T10:22:30.715179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.717 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.718 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-31T10:22:30.718067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.723 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.728 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-31T10:22:30.730727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.732 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-31T10:22:30.733507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.770 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.771 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.771 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.806 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.807 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.807 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-31T10:22:30.810182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.908 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.909 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:30.909 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.003 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.004 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.005 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.007 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-31T10:22:31.007673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.039 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/memory.usage volume: 48.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.059 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.060 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.061 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 780107196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-31T10:22:31.060949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.061 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 104782607 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.061 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.latency volume: 88130988 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.062 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 704054408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.062 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 151704385 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.063 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.latency volume: 127968148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.065 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.065 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-31T10:22:31.064935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-31T10:22:31.067712) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.067 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.068 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.069 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.069 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.070 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.070 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.071 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.072 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.072 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.074 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-31T10:22:31.072461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-31T10:22:31.077511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.077 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.078 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.078 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.079 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.079 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.080 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.080 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.083 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.083 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-31T10:22:31.082897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-31T10:22:31.085669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.086 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.086 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-31T10:22:31.088441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.088 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.089 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.091 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.091 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-31T10:22:31.091011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.093 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.094 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/cpu volume: 51200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.094 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/cpu volume: 41440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-31T10:22:31.093784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.096 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.097 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.097 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-31T10:22:31.096794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.098 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.099 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.099 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.099 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-31T10:22:31.098973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.100 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.101 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-31T10:22:31.100866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.101 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.101 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.102 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.102 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.102 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.104 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 2406641552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.104 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 9792973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.104 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.105 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 3787034021 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-31T10:22:31.104018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.105 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 12009423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.105 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.107 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.107 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-31T10:22:31.106996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.108 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.109 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.109 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.109 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.109 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.110 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.110 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-31T10:22:31.108923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-31T10:22:31.112455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.113 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.113 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.113 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.114 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.114 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.116 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.116 14 DEBUG ceilometer.compute.pollsters [-] a6212880-427f-4876-8598-06909416bde1/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-31T10:22:31.116174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.116 14 DEBUG ceilometer.compute.pollsters [-] ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:22:31.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:22:31 compute-0 openstack_network_exporter[204162]: ERROR   10:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:22:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:22:31 compute-0 openstack_network_exporter[204162]: ERROR   10:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:22:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:22:31 compute-0 nova_compute[185194]: 2026-01-31 10:22:31.781 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:32 compute-0 nova_compute[185194]: 2026-01-31 10:22:32.337 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:36 compute-0 nova_compute[185194]: 2026-01-31 10:22:36.783 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:37 compute-0 nova_compute[185194]: 2026-01-31 10:22:37.340 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:38 compute-0 podman[248732]: 2026-01-31 10:22:38.948417225 +0000 UTC m=+0.073930695 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:22:41 compute-0 nova_compute[185194]: 2026-01-31 10:22:41.785 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:42 compute-0 nova_compute[185194]: 2026-01-31 10:22:42.343 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:44 compute-0 podman[248756]: 2026-01-31 10:22:44.751004675 +0000 UTC m=+0.082538252 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Jan 31 10:22:45 compute-0 podman[248778]: 2026-01-31 10:22:45.949859277 +0000 UTC m=+0.071778181 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:22:46 compute-0 nova_compute[185194]: 2026-01-31 10:22:46.788 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:47 compute-0 nova_compute[185194]: 2026-01-31 10:22:47.346 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:51 compute-0 nova_compute[185194]: 2026-01-31 10:22:51.790 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:52 compute-0 nova_compute[185194]: 2026-01-31 10:22:52.348 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:53 compute-0 podman[248799]: 2026-01-31 10:22:53.981314172 +0000 UTC m=+0.098948024 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:22:54 compute-0 podman[248798]: 2026-01-31 10:22:54.011576231 +0000 UTC m=+0.132905335 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:22:56 compute-0 nova_compute[185194]: 2026-01-31 10:22:56.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:56 compute-0 nova_compute[185194]: 2026-01-31 10:22:56.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:22:56 compute-0 nova_compute[185194]: 2026-01-31 10:22:56.794 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:56 compute-0 podman[248842]: 2026-01-31 10:22:56.981686126 +0000 UTC m=+0.102759759 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:22:57 compute-0 nova_compute[185194]: 2026-01-31 10:22:57.072 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:22:57 compute-0 nova_compute[185194]: 2026-01-31 10:22:57.072 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:22:57 compute-0 nova_compute[185194]: 2026-01-31 10:22:57.073 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:22:57 compute-0 nova_compute[185194]: 2026-01-31 10:22:57.350 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:22:58 compute-0 nova_compute[185194]: 2026-01-31 10:22:58.095 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [{"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:22:58 compute-0 nova_compute[185194]: 2026-01-31 10:22:58.108 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:22:58 compute-0 nova_compute[185194]: 2026-01-31 10:22:58.109 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:22:58 compute-0 nova_compute[185194]: 2026-01-31 10:22:58.110 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:58 compute-0 nova_compute[185194]: 2026-01-31 10:22:58.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:59 compute-0 nova_compute[185194]: 2026-01-31 10:22:59.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:22:59 compute-0 podman[201068]: time="2026-01-31T10:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:22:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:22:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:23:00 compute-0 podman[248866]: 2026-01-31 10:23:00.986532659 +0000 UTC m=+0.094595324 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:23:01 compute-0 podman[248867]: 2026-01-31 10:23:01.024879631 +0000 UTC m=+0.125517199 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:23:01 compute-0 openstack_network_exporter[204162]: ERROR   10:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:23:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:23:01 compute-0 openstack_network_exporter[204162]: ERROR   10:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:23:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:23:01 compute-0 nova_compute[185194]: 2026-01-31 10:23:01.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:01 compute-0 nova_compute[185194]: 2026-01-31 10:23:01.797 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:02 compute-0 nova_compute[185194]: 2026-01-31 10:23:02.354 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:03 compute-0 nova_compute[185194]: 2026-01-31 10:23:03.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:04 compute-0 nova_compute[185194]: 2026-01-31 10:23:04.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:04 compute-0 nova_compute[185194]: 2026-01-31 10:23:04.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.643 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.644 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.645 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.762 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.846 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.847 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.912 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.915 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.977 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:05 compute-0 nova_compute[185194]: 2026-01-31 10:23:05.978 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.046 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.054 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.116 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.117 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.175 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.176 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.228 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.229 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.295 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.713 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.714 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4810MB free_disk=72.34969329833984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.715 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.715 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.801 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.813 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a6212880-427f-4876-8598-06909416bde1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.814 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.814 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.814 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.879 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.894 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.897 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:23:06 compute-0 nova_compute[185194]: 2026-01-31 10:23:06.898 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:07 compute-0 nova_compute[185194]: 2026-01-31 10:23:07.358 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:09 compute-0 podman[248927]: 2026-01-31 10:23:09.971175242 +0000 UTC m=+0.086283825 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:23:11 compute-0 nova_compute[185194]: 2026-01-31 10:23:11.806 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:12 compute-0 nova_compute[185194]: 2026-01-31 10:23:12.361 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:14 compute-0 podman[248951]: 2026-01-31 10:23:14.969974993 +0000 UTC m=+0.081975238 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.created=2026-01-22T05:09:47Z, vendor=Red Hat, Inc., distribution-scope=public)
Jan 31 10:23:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:16.453 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:16.454 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:16.455 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:16 compute-0 nova_compute[185194]: 2026-01-31 10:23:16.811 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:17 compute-0 podman[248971]: 2026-01-31 10:23:17.003693179 +0000 UTC m=+0.115707523 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 10:23:17 compute-0 nova_compute[185194]: 2026-01-31 10:23:17.365 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:21 compute-0 nova_compute[185194]: 2026-01-31 10:23:21.814 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.367 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.608 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.609 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.609 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.609 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.610 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.611 185198 INFO nova.compute.manager [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Terminating instance
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.612 185198 DEBUG nova.compute.manager [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:23:22 compute-0 kernel: tapde051711-0b (unregistering): left promiscuous mode
Jan 31 10:23:22 compute-0 NetworkManager[56281]: <info>  [1769855002.6516] device (tapde051711-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:23:22 compute-0 ovn_controller[97627]: 2026-01-31T10:23:22Z|00058|binding|INFO|Releasing lport de051711-0bd4-4c0b-88e5-77353f5ab169 from this chassis (sb_readonly=0)
Jan 31 10:23:22 compute-0 ovn_controller[97627]: 2026-01-31T10:23:22Z|00059|binding|INFO|Setting lport de051711-0bd4-4c0b-88e5-77353f5ab169 down in Southbound
Jan 31 10:23:22 compute-0 ovn_controller[97627]: 2026-01-31T10:23:22Z|00060|binding|INFO|Removing iface tapde051711-0b ovn-installed in OVS
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.658 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.661 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.666 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:49:95 192.168.0.108'], port_security=['fa:16:3e:11:49:95 192.168.0.108'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-wbazt7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-port-xfrh66srumvj', 'neutron:cidrs': '192.168.0.108/24', 'neutron:device_id': 'ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-wbazt7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-port-xfrh66srumvj', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=de051711-0bd4-4c0b-88e5-77353f5ab169) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.667 106883 INFO neutron.agent.ovn.metadata.agent [-] Port de051711-0bd4-4c0b-88e5-77353f5ab169 in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b unbound from our chassis
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.668 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.669 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.684 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c304f8ee-fd52-46d8-b7af-bee244b3a8d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 31 10:23:22 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 14.952s CPU time.
Jan 31 10:23:22 compute-0 systemd-machined[156556]: Machine qemu-4-instance-00000004 terminated.
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.709 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0b08df-4264-46f2-9333-0c3f04d7d63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.712 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ee2e70-8393-4ad0-ba50-39c6f359da79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.731 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[b9682633-2041-4bc3-87d8-54cf27460358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.741 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[2b112599-5c97-49ee-a05b-af800659ef9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap95411ff1-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:29:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374461, 'reachable_time': 15561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249002, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.750 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3624219b-e5ea-4d9c-84ee-b02bbf7899fc]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374469, 'tstamp': 374469}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249003, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap95411ff1-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 374472, 'tstamp': 374472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249003, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.752 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.753 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.758 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95411ff1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.758 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.758 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.759 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap95411ff1-60, col_values=(('external_ids', {'iface-id': 'aaa4a6d6-2ec8-4da5-aae6-9a5cfd203c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:22 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:22.759 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.884 185198 INFO nova.virt.libvirt.driver [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance destroyed successfully.
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.886 185198 DEBUG nova.objects.instance [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'resources' on Instance uuid ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.933 185198 DEBUG nova.compute.manager [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-vif-unplugged-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.934 185198 DEBUG oslo_concurrency.lockutils [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.934 185198 DEBUG oslo_concurrency.lockutils [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.934 185198 DEBUG oslo_concurrency.lockutils [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.934 185198 DEBUG nova.compute.manager [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] No waiting events found dispatching network-vif-unplugged-de051711-0bd4-4c0b-88e5-77353f5ab169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.934 185198 DEBUG nova.compute.manager [req-30f618f8-e554-4513-9661-47195476eb68 req-58913201-bbc1-4132-bdac-0b4be24f82aa cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-vif-unplugged-de051711-0bd4-4c0b-88e5-77353f5ab169 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.946 185198 DEBUG nova.virt.libvirt.vif [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T10:10:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-7dj5ncv-nt4mlrq4lqz2-wwpkxjsr2qhm-vnf-ke5iuooxiihe',id=4,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T10:10:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cd99fa32-2992-4cd0-a9a0-648127ea67dc'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-tjsb8u99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T10:10:24Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDEyNDIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 31 10:23:22 compute-0 nova_compute[185194]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDEyN
DIyNDQ2MTc3Mjg3MzM2OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAxMjQyMjQ0NjE3NzI4NzMzNjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMTI0MjI0NDYxNzcyODczMzY4PT0tLQo=',user_id='d3342a7282114996b6010246d4ade24e',uuid=ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.946 185198 DEBUG nova.network.os_vif_util [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "de051711-0bd4-4c0b-88e5-77353f5ab169", "address": "fa:16:3e:11:49:95", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.108", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapde051711-0b", "ovs_interfaceid": "de051711-0bd4-4c0b-88e5-77353f5ab169", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.947 185198 DEBUG nova.network.os_vif_util [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.948 185198 DEBUG os_vif [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.950 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.951 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde051711-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.953 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 rsyslogd[235457]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 10:23:22.946 185198 DEBUG nova.virt.libvirt.vif [None req-34d61d12-be [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.956 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.959 185198 INFO os_vif [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:49:95,bridge_name='br-int',has_traffic_filtering=True,id=de051711-0bd4-4c0b-88e5-77353f5ab169,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapde051711-0b')
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.960 185198 INFO nova.virt.libvirt.driver [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Deleting instance files /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5_del
Jan 31 10:23:22 compute-0 nova_compute[185194]: 2026-01-31 10:23:22.960 185198 INFO nova.virt.libvirt.driver [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Deletion of /var/lib/nova/instances/ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5_del complete
Jan 31 10:23:23 compute-0 nova_compute[185194]: 2026-01-31 10:23:23.051 185198 INFO nova.compute.manager [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Took 0.44 seconds to destroy the instance on the hypervisor.
Jan 31 10:23:23 compute-0 nova_compute[185194]: 2026-01-31 10:23:23.052 185198 DEBUG oslo.service.loopingcall [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:23:23 compute-0 nova_compute[185194]: 2026-01-31 10:23:23.052 185198 DEBUG nova.compute.manager [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:23:23 compute-0 nova_compute[185194]: 2026-01-31 10:23:23.053 185198 DEBUG nova.network.neutron [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:23:23 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:23.066 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:23:23 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:23.069 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:23:23 compute-0 nova_compute[185194]: 2026-01-31 10:23:23.069 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.733 185198 DEBUG nova.network.neutron [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.751 185198 INFO nova.compute.manager [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Took 1.70 seconds to deallocate network for instance.
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.795 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.796 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:24 compute-0 podman[249027]: 2026-01-31 10:23:24.847437582 +0000 UTC m=+0.082898011 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.880 185198 DEBUG nova.compute.provider_tree [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.892 185198 DEBUG nova.scheduler.client.report [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:23:24 compute-0 podman[249026]: 2026-01-31 10:23:24.894981734 +0000 UTC m=+0.134555776 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.910 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.944 185198 INFO nova.scheduler.client.report [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Deleted allocations for instance ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5
Jan 31 10:23:24 compute-0 nova_compute[185194]: 2026-01-31 10:23:24.995 185198 DEBUG oslo_concurrency.lockutils [None req-34d61d12-bef3-4a0c-a587-f42c2acf5b36 d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.011 185198 DEBUG nova.compute.manager [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.012 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.012 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.012 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.012 185198 DEBUG nova.compute.manager [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] No waiting events found dispatching network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.012 185198 WARNING nova.compute.manager [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received unexpected event network-vif-plugged-de051711-0bd4-4c0b-88e5-77353f5ab169 for instance with vm_state deleted and task_state None.
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.013 185198 DEBUG nova.compute.manager [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Received event network-changed-de051711-0bd4-4c0b-88e5-77353f5ab169 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.013 185198 DEBUG nova.compute.manager [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Refreshing instance network info cache due to event network-changed-de051711-0bd4-4c0b-88e5-77353f5ab169. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.013 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.013 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.014 185198 DEBUG nova.network.neutron [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Refreshing network info cache for port de051711-0bd4-4c0b-88e5-77353f5ab169 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.122 185198 DEBUG nova.network.neutron [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.811 185198 DEBUG nova.network.neutron [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 31 10:23:25 compute-0 nova_compute[185194]: 2026-01-31 10:23:25.812 185198 DEBUG oslo_concurrency.lockutils [req-acb13249-79e7-4536-85d7-c1ae30f177a4 req-1271a44b-a607-40e9-90f7-fce4f14b7ba8 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:23:27 compute-0 nova_compute[185194]: 2026-01-31 10:23:27.371 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:27 compute-0 nova_compute[185194]: 2026-01-31 10:23:27.953 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:27 compute-0 podman[249071]: 2026-01-31 10:23:27.955957431 +0000 UTC m=+0.076892340 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:23:28 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:28.074 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:29 compute-0 podman[201068]: time="2026-01-31T10:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:23:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:23:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 31 10:23:31 compute-0 openstack_network_exporter[204162]: ERROR   10:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:23:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:23:31 compute-0 openstack_network_exporter[204162]: ERROR   10:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:23:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:23:31 compute-0 podman[249095]: 2026-01-31 10:23:31.964653679 +0000 UTC m=+0.082657855 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 31 10:23:31 compute-0 podman[249096]: 2026-01-31 10:23:31.974473875 +0000 UTC m=+0.085577148 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 31 10:23:32 compute-0 nova_compute[185194]: 2026-01-31 10:23:32.373 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:32 compute-0 nova_compute[185194]: 2026-01-31 10:23:32.956 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:37 compute-0 nova_compute[185194]: 2026-01-31 10:23:37.377 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:37 compute-0 nova_compute[185194]: 2026-01-31 10:23:37.881 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769855002.8797364, ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:23:37 compute-0 nova_compute[185194]: 2026-01-31 10:23:37.882 185198 INFO nova.compute.manager [-] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] VM Stopped (Lifecycle Event)
Jan 31 10:23:37 compute-0 nova_compute[185194]: 2026-01-31 10:23:37.906 185198 DEBUG nova.compute.manager [None req-0ae8932f-f8cb-4095-ac22-b15d4068f601 - - - - - -] [instance: ff2e6823-74d9-404c-aa15-5f5eb8e5d1c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:23:37 compute-0 nova_compute[185194]: 2026-01-31 10:23:37.959 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:40 compute-0 podman[249134]: 2026-01-31 10:23:40.951757831 +0000 UTC m=+0.076383887 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.381 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.478 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.479 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.479 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.479 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.480 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.482 185198 INFO nova.compute.manager [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Terminating instance
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.484 185198 DEBUG nova.compute.manager [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:23:42 compute-0 kernel: tapfea1caad-77 (unregistering): left promiscuous mode
Jan 31 10:23:42 compute-0 NetworkManager[56281]: <info>  [1769855022.5308] device (tapfea1caad-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:23:42 compute-0 ovn_controller[97627]: 2026-01-31T10:23:42Z|00061|binding|INFO|Releasing lport fea1caad-7786-4490-a707-f79cc6ff5fef from this chassis (sb_readonly=0)
Jan 31 10:23:42 compute-0 ovn_controller[97627]: 2026-01-31T10:23:42Z|00062|binding|INFO|Setting lport fea1caad-7786-4490-a707-f79cc6ff5fef down in Southbound
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.548 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 ovn_controller[97627]: 2026-01-31T10:23:42Z|00063|binding|INFO|Removing iface tapfea1caad-77 ovn-installed in OVS
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.554 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.563 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:7b:f6 192.168.0.202'], port_security=['fa:16:3e:e0:7b:f6 192.168.0.202'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.202/24', 'neutron:device_id': 'a6212880-427f-4876-8598-06909416bde1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '155389cbed6644acacdbeeb6155adb54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd620723-38fc-4734-9652-06b1394d185d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9bc02a29-e5f2-4030-b81f-c24def52e630, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=fea1caad-7786-4490-a707-f79cc6ff5fef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.566 106883 INFO neutron.agent.ovn.metadata.agent [-] Port fea1caad-7786-4490-a707-f79cc6ff5fef in datapath 95411ff1-6cab-4c5b-9ab6-3779c480de3b unbound from our chassis
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.568 106883 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 95411ff1-6cab-4c5b-9ab6-3779c480de3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.569 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.570 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[009f4866-aac8-45ca-878d-7e2f3d4e9c4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.571 106883 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b namespace which is not needed anymore
Jan 31 10:23:42 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 31 10:23:42 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 49.497s CPU time.
Jan 31 10:23:42 compute-0 systemd-machined[156556]: Machine qemu-1-instance-00000001 terminated.
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.774 185198 DEBUG nova.compute.manager [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-unplugged-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.775 185198 DEBUG oslo_concurrency.lockutils [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.776 185198 DEBUG oslo_concurrency.lockutils [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.776 185198 DEBUG oslo_concurrency.lockutils [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.776 185198 DEBUG nova.compute.manager [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] No waiting events found dispatching network-vif-unplugged-fea1caad-7786-4490-a707-f79cc6ff5fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.776 185198 DEBUG nova.compute.manager [req-f9ec0f52-07a7-4d61-8c28-86c95de9922e req-ec48456a-ffe2-4bce-ab7a-97997c6d4933 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-unplugged-fea1caad-7786-4490-a707-f79cc6ff5fef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.783 185198 INFO nova.virt.libvirt.driver [-] [instance: a6212880-427f-4876-8598-06909416bde1] Instance destroyed successfully.
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.783 185198 DEBUG nova.objects.instance [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lazy-loading 'resources' on Instance uuid a6212880-427f-4876-8598-06909416bde1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [NOTICE]   (238454) : haproxy version is 2.8.14-c23fe91
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [NOTICE]   (238454) : path to executable is /usr/sbin/haproxy
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [WARNING]  (238454) : Exiting Master process...
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [WARNING]  (238454) : Exiting Master process...
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [ALERT]    (238454) : Current worker (238456) exited with code 143 (Terminated)
Jan 31 10:23:42 compute-0 neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b[238448]: [WARNING]  (238454) : All workers exited. Exiting... (0)
Jan 31 10:23:42 compute-0 systemd[1]: libpod-93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f.scope: Deactivated successfully.
Jan 31 10:23:42 compute-0 podman[249184]: 2026-01-31 10:23:42.800956649 +0000 UTC m=+0.083890605 container died 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.802 185198 DEBUG nova.virt.libvirt.vif [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:58:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='155389cbed6644acacdbeeb6155adb54',ramdisk_id='',reservation_id='r-ec42yxyw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='8b57d666-88c0-4e62-a76a-0d45801ca1a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:58:40Z,user_data=None,user_id='d3342a7282114996b6010246d4ade24e',uuid=a6212880-427f-4876-8598-06909416bde1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.802 185198 DEBUG nova.network.os_vif_util [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converting VIF {"id": "fea1caad-7786-4490-a707-f79cc6ff5fef", "address": "fa:16:3e:e0:7b:f6", "network": {"id": "95411ff1-6cab-4c5b-9ab6-3779c480de3b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.202", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "155389cbed6644acacdbeeb6155adb54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea1caad-77", "ovs_interfaceid": "fea1caad-7786-4490-a707-f79cc6ff5fef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.803 185198 DEBUG nova.network.os_vif_util [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.803 185198 DEBUG os_vif [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.804 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.805 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfea1caad-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.806 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.809 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.812 185198 INFO os_vif [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:7b:f6,bridge_name='br-int',has_traffic_filtering=True,id=fea1caad-7786-4490-a707-f79cc6ff5fef,network=Network(95411ff1-6cab-4c5b-9ab6-3779c480de3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea1caad-77')
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.813 185198 INFO nova.virt.libvirt.driver [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Deleting instance files /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1_del
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.813 185198 INFO nova.virt.libvirt.driver [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Deletion of /var/lib/nova/instances/a6212880-427f-4876-8598-06909416bde1_del complete
Jan 31 10:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f-userdata-shm.mount: Deactivated successfully.
Jan 31 10:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3c331042a0a955f3ab4288a226a884f97cb5fe6d67f0c0dcdd13159df7bce1-merged.mount: Deactivated successfully.
Jan 31 10:23:42 compute-0 podman[249184]: 2026-01-31 10:23:42.855850746 +0000 UTC m=+0.138784702 container cleanup 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 10:23:42 compute-0 systemd[1]: libpod-conmon-93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f.scope: Deactivated successfully.
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.891 185198 INFO nova.compute.manager [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Took 0.41 seconds to destroy the instance on the hypervisor.
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.893 185198 DEBUG oslo.service.loopingcall [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.894 185198 DEBUG nova.compute.manager [-] [instance: a6212880-427f-4876-8598-06909416bde1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.894 185198 DEBUG nova.network.neutron [-] [instance: a6212880-427f-4876-8598-06909416bde1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:23:42 compute-0 podman[249232]: 2026-01-31 10:23:42.946617783 +0000 UTC m=+0.055943724 container remove 93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.953 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f75cb2-ac52-49c3-9450-fc64345ca333]: (4, ('Sat Jan 31 10:23:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b (93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f)\n93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f\nSat Jan 31 10:23:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b (93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f)\n93b3e19ba21ec25ad0cd4aa0963c8aa7c5ff043723dfafb1b506c59c03f3af3f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.956 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ffc85a-4a71-4774-9543-e02ac9890841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.958 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95411ff1-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.961 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 kernel: tap95411ff1-60: left promiscuous mode
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.972 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 nova_compute[185194]: 2026-01-31 10:23:42.975 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.975 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f74db8dd-0301-46ca-bd5a-908ae36cbeb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.990 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[37bc266c-0bbe-4b28-a4a7-c0403ac3da7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:42.992 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[139dbb5f-a42c-45d3-9ede-4f4074f6052c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:43.005 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[07d5b810-a991-46a6-81a5-f63693258d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 374451, 'reachable_time': 29017, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249246, 'error': None, 'target': 'ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d95411ff1\x2d6cab\x2d4c5b\x2d9ab6\x2d3779c480de3b.mount: Deactivated successfully.
Jan 31 10:23:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:43.017 107396 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-95411ff1-6cab-4c5b-9ab6-3779c480de3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 10:23:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:23:43.017 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[95d113a6-f515-4185-b667-e108b77aa5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.712 185198 DEBUG nova.network.neutron [-] [instance: a6212880-427f-4876-8598-06909416bde1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.731 185198 INFO nova.compute.manager [-] [instance: a6212880-427f-4876-8598-06909416bde1] Took 0.84 seconds to deallocate network for instance.
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.802 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.803 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.864 185198 DEBUG nova.compute.provider_tree [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.877 185198 DEBUG nova.scheduler.client.report [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.894 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:43 compute-0 nova_compute[185194]: 2026-01-31 10:23:43.931 185198 INFO nova.scheduler.client.report [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Deleted allocations for instance a6212880-427f-4876-8598-06909416bde1
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.001 185198 DEBUG oslo_concurrency.lockutils [None req-5df2fdb0-bcf6-48fe-9913-173e6361cbbb d3342a7282114996b6010246d4ade24e 155389cbed6644acacdbeeb6155adb54 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.846 185198 DEBUG nova.compute.manager [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.847 185198 DEBUG oslo_concurrency.lockutils [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "a6212880-427f-4876-8598-06909416bde1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.847 185198 DEBUG oslo_concurrency.lockutils [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.848 185198 DEBUG oslo_concurrency.lockutils [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a6212880-427f-4876-8598-06909416bde1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.849 185198 DEBUG nova.compute.manager [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] No waiting events found dispatching network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.849 185198 WARNING nova.compute.manager [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received unexpected event network-vif-plugged-fea1caad-7786-4490-a707-f79cc6ff5fef for instance with vm_state deleted and task_state None.
Jan 31 10:23:44 compute-0 nova_compute[185194]: 2026-01-31 10:23:44.850 185198 DEBUG nova.compute.manager [req-f3d0f318-3a9c-4a8a-82bb-3a629d6fda2f req-f5c45719-0373-4830-aa73-9ce6f0d6f1b2 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a6212880-427f-4876-8598-06909416bde1] Received event network-vif-deleted-fea1caad-7786-4490-a707-f79cc6ff5fef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:23:46 compute-0 podman[249248]: 2026-01-31 10:23:46.002304854 +0000 UTC m=+0.118374210 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vcs-type=git, io.openshift.expose-services=, release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:23:47 compute-0 nova_compute[185194]: 2026-01-31 10:23:47.384 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:47 compute-0 nova_compute[185194]: 2026-01-31 10:23:47.808 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:48 compute-0 podman[249271]: 2026-01-31 10:23:48.006932501 +0000 UTC m=+0.125749766 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 10:23:52 compute-0 nova_compute[185194]: 2026-01-31 10:23:52.387 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:52 compute-0 nova_compute[185194]: 2026-01-31 10:23:52.811 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:55 compute-0 podman[249291]: 2026-01-31 10:23:55.974325802 +0000 UTC m=+0.084458240 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 10:23:56 compute-0 podman[249290]: 2026-01-31 10:23:56.005608636 +0000 UTC m=+0.121183001 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 10:23:57 compute-0 nova_compute[185194]: 2026-01-31 10:23:57.392 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:57 compute-0 nova_compute[185194]: 2026-01-31 10:23:57.778 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769855022.7752795, a6212880-427f-4876-8598-06909416bde1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:23:57 compute-0 nova_compute[185194]: 2026-01-31 10:23:57.779 185198 INFO nova.compute.manager [-] [instance: a6212880-427f-4876-8598-06909416bde1] VM Stopped (Lifecycle Event)
Jan 31 10:23:57 compute-0 nova_compute[185194]: 2026-01-31 10:23:57.811 185198 DEBUG nova.compute.manager [None req-d53c8398-4514-4bdf-97c5-3b74405a1b85 - - - - - -] [instance: a6212880-427f-4876-8598-06909416bde1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:23:57 compute-0 nova_compute[185194]: 2026-01-31 10:23:57.815 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:23:59 compute-0 podman[249335]: 2026-01-31 10:23:59.00232435 +0000 UTC m=+0.125212461 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:23:59 compute-0 podman[201068]: time="2026-01-31T10:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:23:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:23:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.897 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.898 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.898 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.920 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.921 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.921 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:23:59 compute-0 nova_compute[185194]: 2026-01-31 10:23:59.922 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:01 compute-0 openstack_network_exporter[204162]: ERROR   10:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:24:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:24:01 compute-0 openstack_network_exporter[204162]: ERROR   10:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:24:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:24:02 compute-0 nova_compute[185194]: 2026-01-31 10:24:02.393 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:02 compute-0 nova_compute[185194]: 2026-01-31 10:24:02.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:02 compute-0 nova_compute[185194]: 2026-01-31 10:24:02.819 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:02 compute-0 podman[249356]: 2026-01-31 10:24:02.99311683 +0000 UTC m=+0.109051466 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, config_id=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:24:03 compute-0 podman[249357]: 2026-01-31 10:24:03.004344492 +0000 UTC m=+0.114908013 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:24:03 compute-0 nova_compute[185194]: 2026-01-31 10:24:03.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:05 compute-0 nova_compute[185194]: 2026-01-31 10:24:05.599 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:05 compute-0 nova_compute[185194]: 2026-01-31 10:24:05.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:05 compute-0 nova_compute[185194]: 2026-01-31 10:24:05.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:05 compute-0 nova_compute[185194]: 2026-01-31 10:24:05.622 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.395 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.647 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.647 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.648 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.648 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:24:07 compute-0 nova_compute[185194]: 2026-01-31 10:24:07.821 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.089 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.090 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5369MB free_disk=72.3940200805664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.090 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.091 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.296 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.296 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.316 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.337 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.337 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.353 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.373 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.400 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.415 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.437 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:24:08 compute-0 nova_compute[185194]: 2026-01-31 10:24:08.438 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:24:11 compute-0 podman[249397]: 2026-01-31 10:24:11.971345972 +0000 UTC m=+0.093408414 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:24:12 compute-0 nova_compute[185194]: 2026-01-31 10:24:12.399 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:12 compute-0 nova_compute[185194]: 2026-01-31 10:24:12.825 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:13 compute-0 ovn_controller[97627]: 2026-01-31T10:24:13Z|00064|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 31 10:24:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:24:16.454 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:24:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:24:16.455 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:24:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:24:16.455 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:24:17 compute-0 podman[249421]: 2026-01-31 10:24:17.006655512 +0000 UTC m=+0.127997972 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, version=9.7, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:24:17 compute-0 nova_compute[185194]: 2026-01-31 10:24:17.405 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:17 compute-0 nova_compute[185194]: 2026-01-31 10:24:17.828 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:19 compute-0 podman[249442]: 2026-01-31 10:24:19.003126085 +0000 UTC m=+0.121532320 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 10:24:22 compute-0 nova_compute[185194]: 2026-01-31 10:24:22.405 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:22 compute-0 nova_compute[185194]: 2026-01-31 10:24:22.830 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:27 compute-0 podman[249462]: 2026-01-31 10:24:27.025157868 +0000 UTC m=+0.138467048 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126)
Jan 31 10:24:27 compute-0 podman[249461]: 2026-01-31 10:24:27.028044659 +0000 UTC m=+0.144335662 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 10:24:27 compute-0 nova_compute[185194]: 2026-01-31 10:24:27.409 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:27 compute-0 nova_compute[185194]: 2026-01-31 10:24:27.834 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:29 compute-0 podman[201068]: time="2026-01-31T10:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:24:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:24:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Jan 31 10:24:30 compute-0 podman[249504]: 2026-01-31 10:24:30.00202717 +0000 UTC m=+0.125716975 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.704 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.704 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.717 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.719 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:24:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:24:31 compute-0 openstack_network_exporter[204162]: ERROR   10:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:24:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:24:31 compute-0 openstack_network_exporter[204162]: ERROR   10:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:24:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:24:32 compute-0 nova_compute[185194]: 2026-01-31 10:24:32.411 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:32 compute-0 nova_compute[185194]: 2026-01-31 10:24:32.837 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:33 compute-0 podman[249530]: 2026-01-31 10:24:33.998413215 +0000 UTC m=+0.119117573 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 31 10:24:33 compute-0 podman[249531]: 2026-01-31 10:24:33.998539198 +0000 UTC m=+0.120235381 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:24:37 compute-0 nova_compute[185194]: 2026-01-31 10:24:37.413 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:37 compute-0 nova_compute[185194]: 2026-01-31 10:24:37.840 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:42 compute-0 nova_compute[185194]: 2026-01-31 10:24:42.417 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:42 compute-0 nova_compute[185194]: 2026-01-31 10:24:42.843 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:42 compute-0 podman[249570]: 2026-01-31 10:24:42.969404364 +0000 UTC m=+0.094338185 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:24:47 compute-0 nova_compute[185194]: 2026-01-31 10:24:47.420 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:47 compute-0 nova_compute[185194]: 2026-01-31 10:24:47.846 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:47 compute-0 podman[249593]: 2026-01-31 10:24:47.96609501 +0000 UTC m=+0.086196296 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1769056855, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git)
Jan 31 10:24:48 compute-0 nova_compute[185194]: 2026-01-31 10:24:48.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:48 compute-0 nova_compute[185194]: 2026-01-31 10:24:48.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:24:50 compute-0 podman[249615]: 2026-01-31 10:24:50.010908156 +0000 UTC m=+0.123610034 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 10:24:52 compute-0 nova_compute[185194]: 2026-01-31 10:24:52.425 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:52 compute-0 nova_compute[185194]: 2026-01-31 10:24:52.850 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:57 compute-0 nova_compute[185194]: 2026-01-31 10:24:57.427 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:57 compute-0 podman[249633]: 2026-01-31 10:24:57.57712648 +0000 UTC m=+0.120158879 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:24:57 compute-0 podman[249632]: 2026-01-31 10:24:57.645726333 +0000 UTC m=+0.189814978 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 10:24:57 compute-0 nova_compute[185194]: 2026-01-31 10:24:57.853 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:24:59 compute-0 nova_compute[185194]: 2026-01-31 10:24:59.622 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:24:59 compute-0 podman[201068]: time="2026-01-31T10:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:24:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:24:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Jan 31 10:25:00 compute-0 nova_compute[185194]: 2026-01-31 10:25:00.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:00 compute-0 nova_compute[185194]: 2026-01-31 10:25:00.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:25:00 compute-0 nova_compute[185194]: 2026-01-31 10:25:00.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:25:00 compute-0 nova_compute[185194]: 2026-01-31 10:25:00.623 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 10:25:00 compute-0 podman[249677]: 2026-01-31 10:25:00.969980689 +0000 UTC m=+0.087257251 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:25:01 compute-0 openstack_network_exporter[204162]: ERROR   10:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:25:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:25:01 compute-0 openstack_network_exporter[204162]: ERROR   10:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:25:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:25:01 compute-0 nova_compute[185194]: 2026-01-31 10:25:01.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:01 compute-0 nova_compute[185194]: 2026-01-31 10:25:01.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:02 compute-0 nova_compute[185194]: 2026-01-31 10:25:02.429 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:02 compute-0 nova_compute[185194]: 2026-01-31 10:25:02.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:02 compute-0 nova_compute[185194]: 2026-01-31 10:25:02.855 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:03 compute-0 nova_compute[185194]: 2026-01-31 10:25:03.652 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:05 compute-0 podman[249702]: 2026-01-31 10:25:05.004997342 +0000 UTC m=+0.122127147 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 10:25:05 compute-0 podman[249701]: 2026-01-31 10:25:05.023121727 +0000 UTC m=+0.142454316 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 31 10:25:05 compute-0 nova_compute[185194]: 2026-01-31 10:25:05.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:05 compute-0 nova_compute[185194]: 2026-01-31 10:25:05.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:05 compute-0 nova_compute[185194]: 2026-01-31 10:25:05.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:25:06 compute-0 nova_compute[185194]: 2026-01-31 10:25:06.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:07 compute-0 nova_compute[185194]: 2026-01-31 10:25:07.430 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:07 compute-0 nova_compute[185194]: 2026-01-31 10:25:07.857 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:08 compute-0 nova_compute[185194]: 2026-01-31 10:25:08.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:08 compute-0 nova_compute[185194]: 2026-01-31 10:25:08.641 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:25:08 compute-0 nova_compute[185194]: 2026-01-31 10:25:08.641 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:25:08 compute-0 nova_compute[185194]: 2026-01-31 10:25:08.642 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:25:08 compute-0 nova_compute[185194]: 2026-01-31 10:25:08.642 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.147 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.149 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.39455032348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.150 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.150 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.299 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.300 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.390 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.408 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.411 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:25:09 compute-0 nova_compute[185194]: 2026-01-31 10:25:09.411 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:25:12 compute-0 nova_compute[185194]: 2026-01-31 10:25:12.434 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:12 compute-0 nova_compute[185194]: 2026-01-31 10:25:12.861 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:13 compute-0 podman[249735]: 2026-01-31 10:25:13.977917109 +0000 UTC m=+0.096575271 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:25:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:25:16.455 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:25:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:25:16.456 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:25:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:25:16.456 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:25:17 compute-0 nova_compute[185194]: 2026-01-31 10:25:17.435 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:17 compute-0 nova_compute[185194]: 2026-01-31 10:25:17.864 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:18 compute-0 nova_compute[185194]: 2026-01-31 10:25:18.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:25:18 compute-0 nova_compute[185194]: 2026-01-31 10:25:18.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:25:18 compute-0 nova_compute[185194]: 2026-01-31 10:25:18.630 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 10:25:18 compute-0 podman[249758]: 2026-01-31 10:25:18.981283857 +0000 UTC m=+0.102775432 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, build-date=2026-01-22T05:09:47Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, 
org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, name=ubi9/ubi-minimal, release=1769056855, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 10:25:20 compute-0 podman[249780]: 2026-01-31 10:25:20.980580027 +0000 UTC m=+0.098246981 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:25:22 compute-0 nova_compute[185194]: 2026-01-31 10:25:22.439 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:22 compute-0 nova_compute[185194]: 2026-01-31 10:25:22.867 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:27 compute-0 nova_compute[185194]: 2026-01-31 10:25:27.442 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:27 compute-0 nova_compute[185194]: 2026-01-31 10:25:27.871 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:27 compute-0 podman[249799]: 2026-01-31 10:25:27.991327485 +0000 UTC m=+0.101440740 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 10:25:28 compute-0 podman[249798]: 2026-01-31 10:25:28.018595774 +0000 UTC m=+0.138559491 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 10:25:29 compute-0 podman[201068]: time="2026-01-31T10:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:25:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:25:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Jan 31 10:25:31 compute-0 openstack_network_exporter[204162]: ERROR   10:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:25:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:25:31 compute-0 openstack_network_exporter[204162]: ERROR   10:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:25:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:25:31 compute-0 podman[249841]: 2026-01-31 10:25:31.986081408 +0000 UTC m=+0.104809901 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:25:32 compute-0 nova_compute[185194]: 2026-01-31 10:25:32.443 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:32 compute-0 nova_compute[185194]: 2026-01-31 10:25:32.875 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:35 compute-0 podman[249865]: 2026-01-31 10:25:35.997193633 +0000 UTC m=+0.110705687 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, version=9.4, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible)
Jan 31 10:25:36 compute-0 podman[249866]: 2026-01-31 10:25:36.019856769 +0000 UTC m=+0.129703533 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:25:37 compute-0 nova_compute[185194]: 2026-01-31 10:25:37.446 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:37 compute-0 nova_compute[185194]: 2026-01-31 10:25:37.879 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:42 compute-0 nova_compute[185194]: 2026-01-31 10:25:42.448 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:42 compute-0 nova_compute[185194]: 2026-01-31 10:25:42.882 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:45 compute-0 podman[249901]: 2026-01-31 10:25:45.08095652 +0000 UTC m=+0.139538945 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:25:47 compute-0 nova_compute[185194]: 2026-01-31 10:25:47.449 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:47 compute-0 nova_compute[185194]: 2026-01-31 10:25:47.885 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:49 compute-0 podman[249927]: 2026-01-31 10:25:49.970542888 +0000 UTC m=+0.094220232 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.created=2026-01-22T05:09:47Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, release=1769056855, version=9.7, maintainer=Red Hat, Inc.)
Jan 31 10:25:51 compute-0 podman[249947]: 2026-01-31 10:25:51.992720889 +0000 UTC m=+0.109163050 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 10:25:52 compute-0 nova_compute[185194]: 2026-01-31 10:25:52.451 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:52 compute-0 nova_compute[185194]: 2026-01-31 10:25:52.888 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:57 compute-0 nova_compute[185194]: 2026-01-31 10:25:57.456 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:57 compute-0 nova_compute[185194]: 2026-01-31 10:25:57.891 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:25:58 compute-0 podman[249968]: 2026-01-31 10:25:58.995510991 +0000 UTC m=+0.106531524 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 31 10:25:59 compute-0 podman[249967]: 2026-01-31 10:25:59.037330857 +0000 UTC m=+0.158047168 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 10:25:59 compute-0 podman[201068]: time="2026-01-31T10:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:25:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:25:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Jan 31 10:26:00 compute-0 nova_compute[185194]: 2026-01-31 10:26:00.629 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:01 compute-0 openstack_network_exporter[204162]: ERROR   10:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:26:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:26:01 compute-0 openstack_network_exporter[204162]: ERROR   10:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:26:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:26:01 compute-0 nova_compute[185194]: 2026-01-31 10:26:01.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:01 compute-0 nova_compute[185194]: 2026-01-31 10:26:01.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:26:01 compute-0 nova_compute[185194]: 2026-01-31 10:26:01.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:26:01 compute-0 nova_compute[185194]: 2026-01-31 10:26:01.626 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 10:26:01 compute-0 nova_compute[185194]: 2026-01-31 10:26:01.627 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:02 compute-0 nova_compute[185194]: 2026-01-31 10:26:02.460 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:02 compute-0 nova_compute[185194]: 2026-01-31 10:26:02.894 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:02 compute-0 podman[250009]: 2026-01-31 10:26:02.998154499 +0000 UTC m=+0.117031883 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:26:03 compute-0 nova_compute[185194]: 2026-01-31 10:26:03.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:05 compute-0 nova_compute[185194]: 2026-01-31 10:26:05.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:05 compute-0 nova_compute[185194]: 2026-01-31 10:26:05.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:05 compute-0 nova_compute[185194]: 2026-01-31 10:26:05.619 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:05 compute-0 nova_compute[185194]: 2026-01-31 10:26:05.619 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:26:06 compute-0 nova_compute[185194]: 2026-01-31 10:26:06.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:06 compute-0 nova_compute[185194]: 2026-01-31 10:26:06.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:06 compute-0 podman[250033]: 2026-01-31 10:26:06.958995171 +0000 UTC m=+0.078486896 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Jan 31 10:26:06 compute-0 podman[250034]: 2026-01-31 10:26:06.997453825 +0000 UTC m=+0.110504832 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:26:07 compute-0 nova_compute[185194]: 2026-01-31 10:26:07.466 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:07 compute-0 nova_compute[185194]: 2026-01-31 10:26:07.896 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:10 compute-0 nova_compute[185194]: 2026-01-31 10:26:10.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:26:10 compute-0 nova_compute[185194]: 2026-01-31 10:26:10.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:26:10 compute-0 nova_compute[185194]: 2026-01-31 10:26:10.639 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:26:10 compute-0 nova_compute[185194]: 2026-01-31 10:26:10.640 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:26:10 compute-0 nova_compute[185194]: 2026-01-31 10:26:10.640 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.008 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.010 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5368MB free_disk=72.39455032348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.010 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.011 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.118 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.119 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.168 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.184 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.186 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:26:11 compute-0 nova_compute[185194]: 2026-01-31 10:26:11.187 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:26:12 compute-0 nova_compute[185194]: 2026-01-31 10:26:12.466 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:12 compute-0 nova_compute[185194]: 2026-01-31 10:26:12.899 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:16 compute-0 podman[250072]: 2026-01-31 10:26:16.014086473 +0000 UTC m=+0.131906997 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:26:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:26:16.456 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:26:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:26:16.457 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:26:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:26:16.457 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:26:17 compute-0 nova_compute[185194]: 2026-01-31 10:26:17.471 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:17 compute-0 nova_compute[185194]: 2026-01-31 10:26:17.903 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:20 compute-0 podman[250095]: 2026-01-31 10:26:20.991633528 +0000 UTC m=+0.107134470 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-01-22T05:09:47Z, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, config_id=openstack_network_exporter, build-date=2026-01-22T05:09:47Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1769056855, vcs-type=git, version=9.7, io.buildah.version=1.33.7)
Jan 31 10:26:22 compute-0 nova_compute[185194]: 2026-01-31 10:26:22.475 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:22 compute-0 nova_compute[185194]: 2026-01-31 10:26:22.906 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:22 compute-0 podman[250116]: 2026-01-31 10:26:22.939524627 +0000 UTC m=+0.063353016 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 10:26:27 compute-0 nova_compute[185194]: 2026-01-31 10:26:27.477 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:27 compute-0 nova_compute[185194]: 2026-01-31 10:26:27.909 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:29 compute-0 podman[201068]: time="2026-01-31T10:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:26:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:26:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3918 "" "Go-http-client/1.1"
Jan 31 10:26:29 compute-0 podman[250136]: 2026-01-31 10:26:29.981908159 +0000 UTC m=+0.096805896 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260126, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 10:26:30 compute-0 podman[250135]: 2026-01-31 10:26:30.035835682 +0000 UTC m=+0.161775029 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.704 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.705 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.705 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd94ba2ba10>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:26:30.718 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:26:31 compute-0 openstack_network_exporter[204162]: ERROR   10:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:26:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:26:31 compute-0 openstack_network_exporter[204162]: ERROR   10:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:26:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:26:32 compute-0 nova_compute[185194]: 2026-01-31 10:26:32.482 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:32 compute-0 nova_compute[185194]: 2026-01-31 10:26:32.912 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:33 compute-0 podman[250180]: 2026-01-31 10:26:33.956988111 +0000 UTC m=+0.078912558 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:26:37 compute-0 nova_compute[185194]: 2026-01-31 10:26:37.486 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:37 compute-0 nova_compute[185194]: 2026-01-31 10:26:37.915 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:37 compute-0 podman[250205]: 2026-01-31 10:26:37.967462601 +0000 UTC m=+0.078342964 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, version=9.4, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=)
Jan 31 10:26:37 compute-0 podman[250206]: 2026-01-31 10:26:37.98497813 +0000 UTC m=+0.095911544 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 10:26:42 compute-0 nova_compute[185194]: 2026-01-31 10:26:42.488 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:42 compute-0 nova_compute[185194]: 2026-01-31 10:26:42.918 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:46 compute-0 podman[250242]: 2026-01-31 10:26:46.962434688 +0000 UTC m=+0.076811666 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:26:47 compute-0 nova_compute[185194]: 2026-01-31 10:26:47.493 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:47 compute-0 nova_compute[185194]: 2026-01-31 10:26:47.921 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:52 compute-0 podman[250267]: 2026-01-31 10:26:52.001176965 +0000 UTC m=+0.122330713 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.7, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 31 10:26:52 compute-0 nova_compute[185194]: 2026-01-31 10:26:52.496 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:52 compute-0 nova_compute[185194]: 2026-01-31 10:26:52.923 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:53 compute-0 podman[250289]: 2026-01-31 10:26:53.977561343 +0000 UTC m=+0.103255936 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 10:26:57 compute-0 nova_compute[185194]: 2026-01-31 10:26:57.499 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:57 compute-0 nova_compute[185194]: 2026-01-31 10:26:57.927 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:26:59 compute-0 podman[201068]: time="2026-01-31T10:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:26:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:26:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 31 10:27:00 compute-0 podman[250309]: 2026-01-31 10:27:00.982429276 +0000 UTC m=+0.098310833 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 31 10:27:01 compute-0 podman[250308]: 2026-01-31 10:27:01.010629577 +0000 UTC m=+0.139503473 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 10:27:01 compute-0 openstack_network_exporter[204162]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:27:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:27:01 compute-0 openstack_network_exporter[204162]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:27:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.189 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.190 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.190 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.292 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.293 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.293 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.503 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:02 compute-0 nova_compute[185194]: 2026-01-31 10:27:02.930 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:03 compute-0 nova_compute[185194]: 2026-01-31 10:27:03.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:04 compute-0 podman[250355]: 2026-01-31 10:27:04.972708582 +0000 UTC m=+0.091400303 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:27:06 compute-0 nova_compute[185194]: 2026-01-31 10:27:06.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:06 compute-0 nova_compute[185194]: 2026-01-31 10:27:06.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:27:07 compute-0 nova_compute[185194]: 2026-01-31 10:27:07.506 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:07 compute-0 nova_compute[185194]: 2026-01-31 10:27:07.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:07 compute-0 nova_compute[185194]: 2026-01-31 10:27:07.933 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:08 compute-0 nova_compute[185194]: 2026-01-31 10:27:08.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:08 compute-0 nova_compute[185194]: 2026-01-31 10:27:08.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:08 compute-0 podman[250380]: 2026-01-31 10:27:08.991461616 +0000 UTC m=+0.109274452 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 10:27:09 compute-0 podman[250379]: 2026-01-31 10:27:09.023562094 +0000 UTC m=+0.145108481 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, name=ubi9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 31 10:27:11 compute-0 nova_compute[185194]: 2026-01-31 10:27:11.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.427 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.427 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.428 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.428 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.509 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.783 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.784 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5361MB free_disk=72.39455032348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.784 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.784 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.937 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.940 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.941 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.976 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:27:12 compute-0 nova_compute[185194]: 2026-01-31 10:27:12.998 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:27:13 compute-0 nova_compute[185194]: 2026-01-31 10:27:13.001 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:27:13 compute-0 nova_compute[185194]: 2026-01-31 10:27:13.002 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:27:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:16.458 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:27:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:16.458 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:27:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:16.459 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:27:17 compute-0 nova_compute[185194]: 2026-01-31 10:27:17.513 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:17 compute-0 nova_compute[185194]: 2026-01-31 10:27:17.941 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:17 compute-0 podman[250416]: 2026-01-31 10:27:17.974876868 +0000 UTC m=+0.081821399 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:27:22 compute-0 nova_compute[185194]: 2026-01-31 10:27:22.516 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:22 compute-0 nova_compute[185194]: 2026-01-31 10:27:22.943 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:22 compute-0 podman[250441]: 2026-01-31 10:27:22.989713857 +0000 UTC m=+0.106734209 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-01-22T05:09:47Z, build-date=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1769056855, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, architecture=x86_64)
Jan 31 10:27:25 compute-0 podman[250459]: 2026-01-31 10:27:25.005904681 +0000 UTC m=+0.128827492 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 10:27:27 compute-0 nova_compute[185194]: 2026-01-31 10:27:27.519 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:27 compute-0 nova_compute[185194]: 2026-01-31 10:27:27.946 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:29 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:29.606 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:27:29 compute-0 nova_compute[185194]: 2026-01-31 10:27:29.607 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:29 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:29.608 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:27:29 compute-0 podman[201068]: time="2026-01-31T10:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:27:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:27:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3915 "" "Go-http-client/1.1"
Jan 31 10:27:31 compute-0 openstack_network_exporter[204162]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:27:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:27:31 compute-0 openstack_network_exporter[204162]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:27:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:27:31 compute-0 podman[250477]: 2026-01-31 10:27:31.963296189 +0000 UTC m=+0.086315110 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260126)
Jan 31 10:27:32 compute-0 podman[250476]: 2026-01-31 10:27:32.018601936 +0000 UTC m=+0.145593354 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:27:32 compute-0 nova_compute[185194]: 2026-01-31 10:27:32.521 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:32 compute-0 nova_compute[185194]: 2026-01-31 10:27:32.949 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:36 compute-0 podman[250520]: 2026-01-31 10:27:36.009564268 +0000 UTC m=+0.122734513 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 31 10:27:37 compute-0 nova_compute[185194]: 2026-01-31 10:27:37.526 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:37 compute-0 nova_compute[185194]: 2026-01-31 10:27:37.952 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:38 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:27:38.611 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:27:39 compute-0 podman[250543]: 2026-01-31 10:27:39.989019868 +0000 UTC m=+0.112057640 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the 
base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4)
Jan 31 10:27:40 compute-0 podman[250544]: 2026-01-31 10:27:40.010099175 +0000 UTC m=+0.128315429 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 10:27:42 compute-0 nova_compute[185194]: 2026-01-31 10:27:42.528 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:42 compute-0 nova_compute[185194]: 2026-01-31 10:27:42.954 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:47 compute-0 nova_compute[185194]: 2026-01-31 10:27:47.531 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:47 compute-0 nova_compute[185194]: 2026-01-31 10:27:47.957 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:48 compute-0 podman[250581]: 2026-01-31 10:27:48.968938818 +0000 UTC m=+0.085648942 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:27:52 compute-0 nova_compute[185194]: 2026-01-31 10:27:52.536 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:52 compute-0 nova_compute[185194]: 2026-01-31 10:27:52.960 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:53 compute-0 podman[250605]: 2026-01-31 10:27:53.971168171 +0000 UTC m=+0.097975234 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, version=9.7, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, release=1769056855, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, architecture=x86_64, build-date=2026-01-22T05:09:47Z)
Jan 31 10:27:55 compute-0 podman[250625]: 2026-01-31 10:27:55.986172345 +0000 UTC m=+0.108013821 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 10:27:57 compute-0 nova_compute[185194]: 2026-01-31 10:27:57.542 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:57 compute-0 nova_compute[185194]: 2026-01-31 10:27:57.963 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:27:59 compute-0 ovn_controller[97627]: 2026-01-31T10:27:59Z|00065|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 31 10:27:59 compute-0 podman[201068]: time="2026-01-31T10:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:27:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:27:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3917 "" "Go-http-client/1.1"
Jan 31 10:28:01 compute-0 openstack_network_exporter[204162]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:28:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:28:01 compute-0 openstack_network_exporter[204162]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:28:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:28:02 compute-0 nova_compute[185194]: 2026-01-31 10:28:02.544 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:02 compute-0 nova_compute[185194]: 2026-01-31 10:28:02.967 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:02 compute-0 podman[250644]: 2026-01-31 10:28:02.988761201 +0000 UTC m=+0.103037029 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Jan 31 10:28:03 compute-0 podman[250643]: 2026-01-31 10:28:03.047683357 +0000 UTC m=+0.160673583 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.004 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.005 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.005 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.020 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.020 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:04 compute-0 nova_compute[185194]: 2026-01-31 10:28:04.021 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:05 compute-0 nova_compute[185194]: 2026-01-31 10:28:05.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:05 compute-0 nova_compute[185194]: 2026-01-31 10:28:05.624 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:06 compute-0 podman[250688]: 2026-01-31 10:28:06.952907915 +0000 UTC m=+0.075020441 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:28:07 compute-0 nova_compute[185194]: 2026-01-31 10:28:07.548 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:07 compute-0 nova_compute[185194]: 2026-01-31 10:28:07.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:07 compute-0 nova_compute[185194]: 2026-01-31 10:28:07.605 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:28:07 compute-0 nova_compute[185194]: 2026-01-31 10:28:07.971 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:08 compute-0 nova_compute[185194]: 2026-01-31 10:28:08.600 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:08 compute-0 nova_compute[185194]: 2026-01-31 10:28:08.603 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:09 compute-0 nova_compute[185194]: 2026-01-31 10:28:09.716 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:10 compute-0 nova_compute[185194]: 2026-01-31 10:28:10.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:10 compute-0 podman[250714]: 2026-01-31 10:28:10.979204743 +0000 UTC m=+0.084942135 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:28:11 compute-0 podman[250713]: 2026-01-31 10:28:11.007165869 +0000 UTC m=+0.122850365 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, config_id=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base 
layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9)
Jan 31 10:28:11 compute-0 nova_compute[185194]: 2026-01-31 10:28:11.848 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:12 compute-0 nova_compute[185194]: 2026-01-31 10:28:12.552 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:12 compute-0 nova_compute[185194]: 2026-01-31 10:28:12.975 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:13 compute-0 nova_compute[185194]: 2026-01-31 10:28:13.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:28:13 compute-0 nova_compute[185194]: 2026-01-31 10:28:13.660 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:13 compute-0 nova_compute[185194]: 2026-01-31 10:28:13.661 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:13 compute-0 nova_compute[185194]: 2026-01-31 10:28:13.661 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:13 compute-0 nova_compute[185194]: 2026-01-31 10:28:13.662 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.097 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.099 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5366MB free_disk=72.39455032348633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.099 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.099 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.250 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.345 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.346 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.610 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.662 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.666 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:28:14 compute-0 nova_compute[185194]: 2026-01-31 10:28:14.667 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:16.458 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:16.459 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:16.459 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:16 compute-0 nova_compute[185194]: 2026-01-31 10:28:16.521 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:17 compute-0 nova_compute[185194]: 2026-01-31 10:28:17.257 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:17 compute-0 nova_compute[185194]: 2026-01-31 10:28:17.554 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:17 compute-0 nova_compute[185194]: 2026-01-31 10:28:17.979 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:19 compute-0 nova_compute[185194]: 2026-01-31 10:28:19.735 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:19 compute-0 podman[250754]: 2026-01-31 10:28:19.980642879 +0000 UTC m=+0.093626338 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 31 10:28:21 compute-0 nova_compute[185194]: 2026-01-31 10:28:21.168 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:22 compute-0 nova_compute[185194]: 2026-01-31 10:28:22.419 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:22 compute-0 nova_compute[185194]: 2026-01-31 10:28:22.557 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:22 compute-0 nova_compute[185194]: 2026-01-31 10:28:22.981 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:24 compute-0 podman[250778]: 2026-01-31 10:28:24.997833046 +0000 UTC m=+0.116776986 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, build-date=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, release=1769056855, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-01-22T05:09:47Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Jan 31 10:28:25 compute-0 nova_compute[185194]: 2026-01-31 10:28:25.080 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:25 compute-0 nova_compute[185194]: 2026-01-31 10:28:25.862 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:27 compute-0 podman[250797]: 2026-01-31 10:28:27.003503413 +0000 UTC m=+0.113164958 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 10:28:27 compute-0 nova_compute[185194]: 2026-01-31 10:28:27.559 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:27 compute-0 nova_compute[185194]: 2026-01-31 10:28:27.984 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:29 compute-0 podman[201068]: time="2026-01-31T10:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:28:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27469 "" "Go-http-client/1.1"
Jan 31 10:28:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3920 "" "Go-http-client/1.1"
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.705 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.706 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.706 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.711 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.712 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.712 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.713 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.714 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.715 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.716 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.716 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.717 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.718 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd9483423c0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.outgoing.packets.error': [], 'disk.root.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'memory.usage': [], 'disk.device.read.latency': [], 'network.incoming.bytes': [], 'disk.device.read.requests': [], 'power.state': [], 'disk.device.write.bytes': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.outgoing.packets': [], 'network.incoming.bytes.delta': [], 'cpu': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.719 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.720 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.721 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.722 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:28:30.723 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:28:31 compute-0 openstack_network_exporter[204162]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:28:31 compute-0 openstack_network_exporter[204162]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.563 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.712 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.712 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.767 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.934 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.935 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.946 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.947 185198 INFO nova.compute.claims [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:28:32 compute-0 nova_compute[185194]: 2026-01-31 10:28:32.987 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.114 185198 DEBUG nova.compute.provider_tree [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.136 185198 DEBUG nova.scheduler.client.report [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.199 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.200 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.250 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.251 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.286 185198 INFO nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.335 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.471 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.472 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.473 185198 INFO nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Creating image(s)
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.474 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.474 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.475 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.476 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.476 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.687 185198 DEBUG nova.policy [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7569d7689502423193b6d841d1f880c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '146dcb55f281466fa2f94bac5029431f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 10:28:33 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:33.715 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:33 compute-0 nova_compute[185194]: 2026-01-31 10:28:33.716 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:33 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:33.717 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:28:33 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:33.717 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:33 compute-0 podman[250818]: 2026-01-31 10:28:33.977360266 +0000 UTC m=+0.093064784 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 31 10:28:34 compute-0 podman[250817]: 2026-01-31 10:28:34.0085123 +0000 UTC m=+0.127037857 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.052 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.052 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.139 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.261 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.262 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.274 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.275 185198 INFO nova.compute.claims [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.576 185198 DEBUG nova.compute.provider_tree [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.619 185198 DEBUG nova.scheduler.client.report [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.644 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.645 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.706 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.707 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.738 185198 INFO nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.773 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.894 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.896 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.896 185198 INFO nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Creating image(s)
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.897 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.898 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.899 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:34 compute-0 nova_compute[185194]: 2026-01-31 10:28:34.899 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.128 185198 DEBUG nova.policy [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2205cfa3b87343af99106de256070375', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '904e48d9dedd4c41a51e9b18681b22c2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.248 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Successfully created port: 511d4760-4285-458e-a3eb-d3db966dc54c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.651 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.737 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.part --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.739 185198 DEBUG nova.virt.images [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] 5f1c614c-1ba8-4e34-915f-7078c46805eb was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.740 185198 DEBUG nova.privsep.utils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 10:28:35 compute-0 nova_compute[185194]: 2026-01-31 10:28:35.741 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.part /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.025 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Successfully updated port: 511d4760-4285-458e-a3eb-d3db966dc54c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.043 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.044 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquired lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.044 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.047 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.part /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.converted" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.052 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.127 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.129 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.157 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.158 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.178 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.193 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.259 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.260 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.261 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.277 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.295 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.296 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.356 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.357 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.398 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Successfully created port: b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.402 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.404 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.405 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.425 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.440 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.456 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.493 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.494 185198 DEBUG nova.virt.disk.api [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Checking if we can resize image /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.494 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.512 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.513 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.559 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.560 185198 DEBUG nova.virt.disk.api [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Cannot resize image /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.561 185198 DEBUG nova.objects.instance [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lazy-loading 'migration_context' on Instance uuid 03e83c48-a567-468c-84c7-335a02ea7439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.562 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.563 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.564 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.588 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.589 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Ensure instance console log exists: /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.589 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.589 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.590 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.597 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.598 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.616 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.616 185198 DEBUG nova.virt.disk.api [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Checking if we can resize image /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.617 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.641 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.680 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.681 185198 DEBUG nova.virt.disk.api [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Cannot resize image /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.681 185198 DEBUG nova.objects.instance [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.704 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.705 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Ensure instance console log exists: /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.705 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.706 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.706 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.784 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.784 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.799 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.800 185198 INFO nova.compute.claims [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:28:36 compute-0 nova_compute[185194]: 2026-01-31 10:28:36.989 185198 DEBUG nova.compute.provider_tree [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.006 185198 DEBUG nova.scheduler.client.report [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.037 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.038 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.074 185198 DEBUG nova.compute.manager [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-changed-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.075 185198 DEBUG nova.compute.manager [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Refreshing instance network info cache due to event network-changed-511d4760-4285-458e-a3eb-d3db966dc54c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.075 185198 DEBUG oslo_concurrency.lockutils [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.145 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.145 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.172 185198 INFO nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.197 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.310 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.314 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.315 185198 INFO nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Creating image(s)
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.316 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.316 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.318 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.343 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.419 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.421 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.421 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.446 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.489 185198 DEBUG nova.policy [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8cec4e2f60d242508ce87cc6af1eea13', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e7c8ba4707564805be9a18429ec92962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.516 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.517 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.547 185198 DEBUG nova.network.neutron [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Updating instance_info_cache with network_info: [{"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.556 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.556 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.557 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.569 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.573 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Releasing lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.574 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Instance network_info: |[{"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.574 185198 DEBUG oslo_concurrency.lockutils [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.574 185198 DEBUG nova.network.neutron [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Refreshing network info cache for port 511d4760-4285-458e-a3eb-d3db966dc54c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.578 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Start _get_guest_xml network_info=[{"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '5f1c614c-1ba8-4e34-915f-7078c46805eb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.586 185198 WARNING nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.593 185198 DEBUG nova.virt.libvirt.host [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.594 185198 DEBUG nova.virt.libvirt.host [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.600 185198 DEBUG nova.virt.libvirt.host [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.601 185198 DEBUG nova.virt.libvirt.host [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.601 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.602 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T10:27:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f3dcbeb5-bd7a-436b-a0c1-9d20fb387210',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.602 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.602 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.603 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.603 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.603 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.603 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.604 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.604 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.604 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.604 185198 DEBUG nova.virt.hardware [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.608 185198 DEBUG nova.virt.libvirt.vif [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1272167923',display_name='tempest-ServerAddressesTestJSON-server-1272167923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1272167923',id=6,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='146dcb55f281466fa2f94bac5029431f',ramdisk_id='',reservation_id='r-jvummq5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1224911199',owner_user_name='tempest-ServerAddressesTestJSON-1224911199-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:33Z,user_data=None,user_id='7569d7689502423193b6d841d1f880c0',uuid=03e83c48-a567-468c-84c7-335a02ea7439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.609 185198 DEBUG nova.network.os_vif_util [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converting VIF {"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.609 185198 DEBUG nova.network.os_vif_util [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.610 185198 DEBUG nova.objects.instance [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lazy-loading 'pci_devices' on Instance uuid 03e83c48-a567-468c-84c7-335a02ea7439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.615 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.616 185198 DEBUG nova.virt.disk.api [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Checking if we can resize image /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.616 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.630 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <uuid>03e83c48-a567-468c-84c7-335a02ea7439</uuid>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <name>instance-00000006</name>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <memory>131072</memory>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:name>tempest-ServerAddressesTestJSON-server-1272167923</nova:name>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:28:37</nova:creationTime>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:flavor name="m1.nano">
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:memory>128</nova:memory>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:user uuid="7569d7689502423193b6d841d1f880c0">tempest-ServerAddressesTestJSON-1224911199-project-member</nova:user>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:project uuid="146dcb55f281466fa2f94bac5029431f">tempest-ServerAddressesTestJSON-1224911199</nova:project>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="5f1c614c-1ba8-4e34-915f-7078c46805eb"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         <nova:port uuid="511d4760-4285-458e-a3eb-d3db966dc54c">
Jan 31 10:28:37 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <system>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="serial">03e83c48-a567-468c-84c7-335a02ea7439</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="uuid">03e83c48-a567-468c-84c7-335a02ea7439</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </system>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <os>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </os>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <features>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </features>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.config"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:28:bb:c9"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <target dev="tap511d4760-42"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/console.log" append="off"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <video>
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </video>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:28:37 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:28:37 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:28:37 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:28:37 compute-0 nova_compute[185194]: </domain>
Jan 31 10:28:37 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.631 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Preparing to wait for external event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.631 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.632 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.632 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.633 185198 DEBUG nova.virt.libvirt.vif [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1272167923',display_name='tempest-ServerAddressesTestJSON-server-1272167923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1272167923',id=6,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='146dcb55f281466fa2f94bac5029431f',ramdisk_id='',reservation_id='r-jvummq5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1224911199',owner_user_name='tempest-ServerAddressesTestJSON-1224911199-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:33Z,user_data=None,user_id='7569d7689502423193b6d841d1f880c0',uuid=03e83c48-a567-468c-84c7-335a02ea7439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.633 185198 DEBUG nova.network.os_vif_util [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converting VIF {"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.634 185198 DEBUG nova.network.os_vif_util [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.634 185198 DEBUG os_vif [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.635 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.636 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.636 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.640 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.640 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap511d4760-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.641 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap511d4760-42, col_values=(('external_ids', {'iface-id': '511d4760-4285-458e-a3eb-d3db966dc54c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:bb:c9', 'vm-uuid': '03e83c48-a567-468c-84c7-335a02ea7439'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:37 compute-0 NetworkManager[56281]: <info>  [1769855317.6450] manager: (tap511d4760-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.645 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.648 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.655 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.657 185198 INFO os_vif [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42')
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.668 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.669 185198 DEBUG nova.virt.disk.api [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Cannot resize image /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.669 185198 DEBUG nova.objects.instance [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a307f4f-f019-4927-9da3-50e4a4aec6f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.686 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.687 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Ensure instance console log exists: /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.688 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.688 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.688 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.732 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.733 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.733 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] No VIF found with MAC fa:16:3e:28:bb:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.734 185198 INFO nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Using config drive
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.937 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Successfully updated port: b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.960 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.960 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:37 compute-0 nova_compute[185194]: 2026-01-31 10:28:37.960 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:28:38 compute-0 podman[250920]: 2026-01-31 10:28:38.006127155 +0000 UTC m=+0.113659840 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:28:38 compute-0 nova_compute[185194]: 2026-01-31 10:28:38.536 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:28:38 compute-0 nova_compute[185194]: 2026-01-31 10:28:38.958 185198 INFO nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Creating config drive at /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.config
Jan 31 10:28:38 compute-0 nova_compute[185194]: 2026-01-31 10:28:38.965 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyzjs8g6v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.087 185198 DEBUG oslo_concurrency.processutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyzjs8g6v" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:39 compute-0 kernel: tap511d4760-42: entered promiscuous mode
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.1598] manager: (tap511d4760-42): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 31 10:28:39 compute-0 ovn_controller[97627]: 2026-01-31T10:28:39Z|00066|binding|INFO|Claiming lport 511d4760-4285-458e-a3eb-d3db966dc54c for this chassis.
Jan 31 10:28:39 compute-0 ovn_controller[97627]: 2026-01-31T10:28:39Z|00067|binding|INFO|511d4760-4285-458e-a3eb-d3db966dc54c: Claiming fa:16:3e:28:bb:c9 10.100.0.10
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.158 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.171 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:bb:c9 10.100.0.10'], port_security=['fa:16:3e:28:bb:c9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '03e83c48-a567-468c-84c7-335a02ea7439', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146dcb55f281466fa2f94bac5029431f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1902f3bc-e606-4f80-95f7-06e56cabbcd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e79fc86-bb13-4bb1-9f7b-4e8c9020ea15, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=511d4760-4285-458e-a3eb-d3db966dc54c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.174 106883 INFO neutron.agent.ovn.metadata.agent [-] Port 511d4760-4285-458e-a3eb-d3db966dc54c in datapath cf9c2056-599e-4c77-ba51-3ee29be3fa70 bound to our chassis
Jan 31 10:28:39 compute-0 ovn_controller[97627]: 2026-01-31T10:28:39Z|00068|binding|INFO|Setting lport 511d4760-4285-458e-a3eb-d3db966dc54c ovn-installed in OVS
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.179 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf9c2056-599e-4c77-ba51-3ee29be3fa70
Jan 31 10:28:39 compute-0 ovn_controller[97627]: 2026-01-31T10:28:39Z|00069|binding|INFO|Setting lport 511d4760-4285-458e-a3eb-d3db966dc54c up in Southbound
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.184 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.193 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[858edb70-fd28-43be-885a-6db03ca94aeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.195 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf9c2056-51 in ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.198 238337 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf9c2056-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.198 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3d494d13-31b2-48e8-9826-5597050db3ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.200 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ac13d7-006d-43c8-a386-b57a6b013e93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 systemd-machined[156556]: New machine qemu-6-instance-00000006.
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.214 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[4f171155-a928-48d9-9c82-d3fc17ee39c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.239 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[fda3c27d-9125-40d6-b21e-370308c83a8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 systemd-udevd[250964]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.2596] device (tap511d4760-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.2610] device (tap511d4760-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.276 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b56f4c-79d8-4d9c-b896-ff8bb6d2bd3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 systemd-udevd[250968]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.2835] manager: (tapcf9c2056-50): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.282 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7b49d3-f7e9-48b6-88b9-8b85eadab25a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.314 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb9ea7-d63b-401e-86d5-b9b67902be9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.318 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[bad04e39-266f-4093-ade8-1a457d94f7f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.3393] device (tapcf9c2056-50): carrier: link connected
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.343 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca7a4aa-b43b-4892-b7ff-146434d7187a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.360 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3f62b5ba-4b61-47ab-b5c1-56bc71900695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf9c2056-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9c:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554082, 'reachable_time': 43869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250994, 'error': None, 'target': 'ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.373 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[da047fe8-5d88-4720-8440-75ebf04a1c10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9c32'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554082, 'tstamp': 554082}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250995, 'error': None, 'target': 'ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.392 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[ec668dd8-13c5-4d2f-8433-99ec319c2e42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf9c2056-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9c:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554082, 'reachable_time': 43869, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250997, 'error': None, 'target': 'ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.426 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb0976c-32e8-4fcb-9eb6-89abb1fbc0fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.500 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8dfb8c-9f23-4015-916a-a3179d23ac69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.503 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf9c2056-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.504 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.505 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf9c2056-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.508 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 kernel: tapcf9c2056-50: entered promiscuous mode
Jan 31 10:28:39 compute-0 NetworkManager[56281]: <info>  [1769855319.5105] manager: (tapcf9c2056-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.511 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.512 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf9c2056-50, col_values=(('external_ids', {'iface-id': '4d83fe71-c54d-4f81-9d19-2ff646dae3ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:39 compute-0 ovn_controller[97627]: 2026-01-31T10:28:39Z|00070|binding|INFO|Releasing lport 4d83fe71-c54d-4f81-9d19-2ff646dae3ea from this chassis (sb_readonly=0)
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.514 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.517 106883 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf9c2056-599e-4c77-ba51-3ee29be3fa70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf9c2056-599e-4c77-ba51-3ee29be3fa70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.519 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[195a5ff1-e78d-4cb2-9e74-58aeeda3b7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.521 106883 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: global
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     log         /dev/log local0 debug
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     log-tag     haproxy-metadata-proxy-cf9c2056-599e-4c77-ba51-3ee29be3fa70
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     user        root
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     group       root
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     maxconn     1024
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     pidfile     /var/lib/neutron/external/pids/cf9c2056-599e-4c77-ba51-3ee29be3fa70.pid.haproxy
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     daemon
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: defaults
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     log global
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     mode http
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     option httplog
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     option dontlognull
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     option http-server-close
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     option forwardfor
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     retries                 3
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     timeout http-request    30s
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     timeout connect         30s
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     timeout client          32s
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     timeout server          32s
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     timeout http-keep-alive 30s
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: listen listener
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     bind 169.254.169.254:80
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:     http-request add-header X-OVN-Network-ID cf9c2056-599e-4c77-ba51-3ee29be3fa70
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 10:28:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:39.522 106883 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'env', 'PROCESS_TAG=haproxy-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf9c2056-599e-4c77-ba51-3ee29be3fa70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.522 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855319.5210493, 03e83c48-a567-468c-84c7-335a02ea7439 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.523 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] VM Started (Lifecycle Event)
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.525 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.594 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.604 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855319.521172, 03e83c48-a567-468c-84c7-335a02ea7439 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.605 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] VM Paused (Lifecycle Event)
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.651 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.660 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.694 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:39 compute-0 nova_compute[185194]: 2026-01-31 10:28:39.922 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Successfully created port: 448bb25f-ea93-4b1e-84eb-c57864b3a306 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 10:28:39 compute-0 podman[251035]: 2026-01-31 10:28:39.988879357 +0000 UTC m=+0.080396863 container create 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 10:28:40 compute-0 systemd[1]: Started libpod-conmon-6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498.scope.
Jan 31 10:28:40 compute-0 podman[251035]: 2026-01-31 10:28:39.948291251 +0000 UTC m=+0.039808837 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 10:28:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 10:28:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e639a9fb322e3989cb889c817226148fa9140060516a2ab3b7d445f7763a3279/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 10:28:40 compute-0 podman[251035]: 2026-01-31 10:28:40.104816142 +0000 UTC m=+0.196333748 container init 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:28:40 compute-0 podman[251035]: 2026-01-31 10:28:40.116501558 +0000 UTC m=+0.208019094 container start 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 10:28:40 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [NOTICE]   (251054) : New worker (251056) forked
Jan 31 10:28:40 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [NOTICE]   (251054) : Loading success.
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.181 185198 DEBUG nova.compute.manager [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.191 185198 DEBUG nova.compute.manager [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing instance network info cache due to event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.192 185198 DEBUG oslo_concurrency.lockutils [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.429 185198 DEBUG nova.network.neutron [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Updated VIF entry in instance network info cache for port 511d4760-4285-458e-a3eb-d3db966dc54c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.431 185198 DEBUG nova.network.neutron [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Updating instance_info_cache with network_info: [{"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:40 compute-0 nova_compute[185194]: 2026-01-31 10:28:40.449 185198 DEBUG oslo_concurrency.lockutils [req-16134748-3c13-4d48-bbd0-d72df07f7dfa req-e40735a0-31f5-473a-8361-cb8ced587924 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-03e83c48-a567-468c-84c7-335a02ea7439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 10:28:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 10:28:41 compute-0 podman[251066]: 2026-01-31 10:28:41.128850985 +0000 UTC m=+0.105030988 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 31 10:28:41 compute-0 podman[251090]: 2026-01-31 10:28:41.14252742 +0000 UTC m=+0.077517843 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64)
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.311 185198 DEBUG nova.network.neutron [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.352 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.353 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Instance network_info: |[{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.354 185198 DEBUG oslo_concurrency.lockutils [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.355 185198 DEBUG nova.network.neutron [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.360 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Start _get_guest_xml network_info=[{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '5f1c614c-1ba8-4e34-915f-7078c46805eb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.371 185198 WARNING nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.381 185198 DEBUG nova.virt.libvirt.host [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.381 185198 DEBUG nova.virt.libvirt.host [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.386 185198 DEBUG nova.virt.libvirt.host [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.387 185198 DEBUG nova.virt.libvirt.host [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.388 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.388 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T10:27:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f3dcbeb5-bd7a-436b-a0c1-9d20fb387210',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.389 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.389 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.390 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.391 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.391 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.392 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.392 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.393 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.393 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.394 185198 DEBUG nova.virt.hardware [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.398 185198 DEBUG nova.virt.libvirt.vif [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2029379366',display_name='tempest-AttachInterfacesUnderV243Test-server-2029379366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2029379366',id=7,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORwrbeqSrLDPK33L6U61pmzrqJwk0WOZatDF9W5YcRdkq/CxYwLn8HMSOQX+Hi4GKHmm56QPogBYXJBCein4czcZMj/h9tT003vLQV6US4qtZy69/l6xGYyRGKuvL0Krw==',key_name='tempest-keypair-616319274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='904e48d9dedd4c41a51e9b18681b22c2',ramdisk_id='',reservation_id='r-k3ec5es6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-967703146',owner_user_name='tempest-AttachInterfacesUnderV243Test-967703146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2205cfa3b87343af99106de256070375',uuid=2c7f9a83-17b3-4e0f-8936-9e6a19920064,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.398 185198 DEBUG nova.network.os_vif_util [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Converting VIF {"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.399 185198 DEBUG nova.network.os_vif_util [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:66:1f,bridge_name='br-int',has_traffic_filtering=True,id=b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce,network=Network(42bce1b2-79a6-4f08-8713-7d1e88cff865),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2cd8b2e-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.401 185198 DEBUG nova.objects.instance [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.444 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <uuid>2c7f9a83-17b3-4e0f-8936-9e6a19920064</uuid>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <name>instance-00000007</name>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <memory>131072</memory>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-2029379366</nova:name>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:28:41</nova:creationTime>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:flavor name="m1.nano">
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:memory>128</nova:memory>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:user uuid="2205cfa3b87343af99106de256070375">tempest-AttachInterfacesUnderV243Test-967703146-project-member</nova:user>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:project uuid="904e48d9dedd4c41a51e9b18681b22c2">tempest-AttachInterfacesUnderV243Test-967703146</nova:project>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="5f1c614c-1ba8-4e34-915f-7078c46805eb"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         <nova:port uuid="b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce">
Jan 31 10:28:41 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <system>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="serial">2c7f9a83-17b3-4e0f-8936-9e6a19920064</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="uuid">2c7f9a83-17b3-4e0f-8936-9e6a19920064</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </system>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <os>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </os>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <features>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </features>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.config"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:de:66:1f"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <target dev="tapb2cd8b2e-64"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/console.log" append="off"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <video>
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </video>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:28:41 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:28:41 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:28:41 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:28:41 compute-0 nova_compute[185194]: </domain>
Jan 31 10:28:41 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.445 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Preparing to wait for external event network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.446 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.446 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.447 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.448 185198 DEBUG nova.virt.libvirt.vif [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2029379366',display_name='tempest-AttachInterfacesUnderV243Test-server-2029379366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2029379366',id=7,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORwrbeqSrLDPK33L6U61pmzrqJwk0WOZatDF9W5YcRdkq/CxYwLn8HMSOQX+Hi4GKHmm56QPogBYXJBCein4czcZMj/h9tT003vLQV6US4qtZy69/l6xGYyRGKuvL0Krw==',key_name='tempest-keypair-616319274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='904e48d9dedd4c41a51e9b18681b22c2',ramdisk_id='',reservation_id='r-k3ec5es6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-967703146',owner_user_name='tempest-AttachInterfacesUnderV243Test-967703146-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2205cfa3b87343af99106de256070375',uuid=2c7f9a83-17b3-4e0f-8936-9e6a19920064,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.448 185198 DEBUG nova.network.os_vif_util [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Converting VIF {"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.449 185198 DEBUG nova.network.os_vif_util [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:66:1f,bridge_name='br-int',has_traffic_filtering=True,id=b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce,network=Network(42bce1b2-79a6-4f08-8713-7d1e88cff865),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2cd8b2e-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.450 185198 DEBUG os_vif [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:66:1f,bridge_name='br-int',has_traffic_filtering=True,id=b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce,network=Network(42bce1b2-79a6-4f08-8713-7d1e88cff865),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2cd8b2e-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.451 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.452 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.452 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.456 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.457 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2cd8b2e-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.458 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2cd8b2e-64, col_values=(('external_ids', {'iface-id': 'b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:66:1f', 'vm-uuid': '2c7f9a83-17b3-4e0f-8936-9e6a19920064'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.460 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:41 compute-0 NetworkManager[56281]: <info>  [1769855321.4621] manager: (tapb2cd8b2e-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.463 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.472 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.474 185198 INFO os_vif [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:66:1f,bridge_name='br-int',has_traffic_filtering=True,id=b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce,network=Network(42bce1b2-79a6-4f08-8713-7d1e88cff865),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2cd8b2e-64')
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.573 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.574 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.574 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] No VIF found with MAC fa:16:3e:de:66:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:28:41 compute-0 nova_compute[185194]: 2026-01-31 10:28:41.575 185198 INFO nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Using config drive
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.070 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Successfully updated port: 448bb25f-ea93-4b1e-84eb-c57864b3a306 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.093 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.093 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquired lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.093 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.568 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.664 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.714 185198 INFO nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Creating config drive at /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.config
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.723 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz2upsas7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.852 185198 DEBUG oslo_concurrency.processutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz2upsas7" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:42 compute-0 kernel: tapb2cd8b2e-64: entered promiscuous mode
Jan 31 10:28:42 compute-0 NetworkManager[56281]: <info>  [1769855322.9247] manager: (tapb2cd8b2e-64): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 31 10:28:42 compute-0 ovn_controller[97627]: 2026-01-31T10:28:42Z|00071|binding|INFO|Claiming lport b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce for this chassis.
Jan 31 10:28:42 compute-0 ovn_controller[97627]: 2026-01-31T10:28:42Z|00072|binding|INFO|b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce: Claiming fa:16:3e:de:66:1f 10.100.0.8
Jan 31 10:28:42 compute-0 nova_compute[185194]: 2026-01-31 10:28:42.926 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.097 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:66:1f 10.100.0.8'], port_security=['fa:16:3e:de:66:1f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2c7f9a83-17b3-4e0f-8936-9e6a19920064', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '904e48d9dedd4c41a51e9b18681b22c2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92afca8c-558a-440b-bafd-9264c4127e10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76b5dbef-4d4b-4066-a72e-fbd675da6768, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:43 compute-0 systemd-udevd[251142]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.100 106883 INFO neutron.agent.ovn.metadata.agent [-] Port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce in datapath 42bce1b2-79a6-4f08-8713-7d1e88cff865 bound to our chassis
Jan 31 10:28:43 compute-0 ovn_controller[97627]: 2026-01-31T10:28:43Z|00073|binding|INFO|Setting lport b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce up in Southbound
Jan 31 10:28:43 compute-0 ovn_controller[97627]: 2026-01-31T10:28:43Z|00074|binding|INFO|Setting lport b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce ovn-installed in OVS
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.103 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 42bce1b2-79a6-4f08-8713-7d1e88cff865
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.114 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.114 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.115 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.115 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.115 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Processing event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.117 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb473ba-d4a5-4231-b89c-3924592afe5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.119 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap42bce1b2-71 in ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.121 238337 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap42bce1b2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.121 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[ec04e59e-c593-4cdc-a511-e0775a8d3544]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 NetworkManager[56281]: <info>  [1769855323.1230] device (tapb2cd8b2e-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.123 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[600f1836-cf30-400e-9a1a-ada9b2f09a82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.116 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.117 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.118 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.118 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.119 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] No waiting events found dispatching network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.119 185198 WARNING nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received unexpected event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c for instance with vm_state building and task_state spawning.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.120 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-changed-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.120 185198 DEBUG nova.compute.manager [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Refreshing instance network info cache due to event network-changed-448bb25f-ea93-4b1e-84eb-c57864b3a306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.120 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.121 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.123 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:28:43 compute-0 NetworkManager[56281]: <info>  [1769855323.1295] device (tapb2cd8b2e-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.131 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855323.1313038, 03e83c48-a567-468c-84c7-335a02ea7439 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.132 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] VM Resumed (Lifecycle Event)
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.136 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5c0c28-a5b1-4014-96ae-63ef792cb44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.139 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.145 185198 INFO nova.virt.libvirt.driver [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Instance spawned successfully.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.145 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:28:43 compute-0 systemd-machined[156556]: New machine qemu-7-instance-00000007.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.157 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.162 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.164 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[9c756528-6976-47a0-ac13-e59820a559cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.174 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.174 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.174 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.175 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.175 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.176 185198 DEBUG nova.virt.libvirt.driver [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.200 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[a462b40e-3ff1-4c21-8574-2be523b5273a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.203 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:43 compute-0 systemd-udevd[251147]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.220 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[aed1fb6f-6937-4a75-8bba-25379ce1d641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 NetworkManager[56281]: <info>  [1769855323.2218] manager: (tap42bce1b2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.253 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[956c3652-90ce-459d-aeaa-ed35b796a52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.257 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe7f5ad-e0b5-466e-bb5a-9f28c9a74b9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 NetworkManager[56281]: <info>  [1769855323.2801] device (tap42bce1b2-70): carrier: link connected
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.287 185198 INFO nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Took 9.82 seconds to spawn the instance on the hypervisor.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.287 185198 DEBUG nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.288 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f83bc5-ff85-4124-a27d-657a412117b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.304 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c51bf3-9c43-42a5-807a-37620a605b1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42bce1b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:a1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554476, 'reachable_time': 20161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251178, 'error': None, 'target': 'ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.319 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[cac1f942-e069-4f33-bb28-d21ad5a2c11a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:a174'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554476, 'tstamp': 554476}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251179, 'error': None, 'target': 'ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.333 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f59839d6-21bf-4b21-80a7-d71ab7d59867]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42bce1b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:a1:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554476, 'reachable_time': 20161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251180, 'error': None, 'target': 'ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.367 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[ccea0353-c7fa-4c02-8123-a8a3b9413dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.372 185198 INFO nova.compute.manager [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Took 10.48 seconds to build instance.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.397 185198 DEBUG oslo_concurrency.lockutils [None req-b729342e-bcd6-42e7-bfae-b7f80a76da0c 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.432 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a4b1a0-4ad7-47c9-a944-9daceed5395e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.434 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42bce1b2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.435 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.435 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42bce1b2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:43 compute-0 kernel: tap42bce1b2-70: entered promiscuous mode
Jan 31 10:28:43 compute-0 NetworkManager[56281]: <info>  [1769855323.4416] manager: (tap42bce1b2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.445 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap42bce1b2-70, col_values=(('external_ids', {'iface-id': '27135e8d-d657-4300-a1f6-02bc2c0e93f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:43 compute-0 ovn_controller[97627]: 2026-01-31T10:28:43Z|00075|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.451 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.459 106883 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/42bce1b2-79a6-4f08-8713-7d1e88cff865.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/42bce1b2-79a6-4f08-8713-7d1e88cff865.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.460 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.461 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c90c508f-60cf-48c0-a394-8b799027ee0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.462 106883 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: global
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     log         /dev/log local0 debug
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     log-tag     haproxy-metadata-proxy-42bce1b2-79a6-4f08-8713-7d1e88cff865
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     user        root
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     group       root
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     maxconn     1024
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     pidfile     /var/lib/neutron/external/pids/42bce1b2-79a6-4f08-8713-7d1e88cff865.pid.haproxy
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     daemon
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: defaults
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     log global
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     mode http
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     option httplog
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     option dontlognull
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     option http-server-close
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     option forwardfor
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     retries                 3
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     timeout http-request    30s
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     timeout connect         30s
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     timeout client          32s
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     timeout server          32s
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     timeout http-keep-alive 30s
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: listen listener
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     bind 169.254.169.254:80
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:     http-request add-header X-OVN-Network-ID 42bce1b2-79a6-4f08-8713-7d1e88cff865
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 10:28:43 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:43.465 106883 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'env', 'PROCESS_TAG=haproxy-42bce1b2-79a6-4f08-8713-7d1e88cff865', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/42bce1b2-79a6-4f08-8713-7d1e88cff865.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.705 185198 DEBUG nova.compute.manager [req-a9d40cb5-42af-4941-aa82-9e0f74c9d58c req-2cbaa182-da57-4b5e-91fc-48172d19ef4e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received event network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.705 185198 DEBUG oslo_concurrency.lockutils [req-a9d40cb5-42af-4941-aa82-9e0f74c9d58c req-2cbaa182-da57-4b5e-91fc-48172d19ef4e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.705 185198 DEBUG oslo_concurrency.lockutils [req-a9d40cb5-42af-4941-aa82-9e0f74c9d58c req-2cbaa182-da57-4b5e-91fc-48172d19ef4e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.713 185198 DEBUG oslo_concurrency.lockutils [req-a9d40cb5-42af-4941-aa82-9e0f74c9d58c req-2cbaa182-da57-4b5e-91fc-48172d19ef4e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.713 185198 DEBUG nova.compute.manager [req-a9d40cb5-42af-4941-aa82-9e0f74c9d58c req-2cbaa182-da57-4b5e-91fc-48172d19ef4e cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Processing event network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.811 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.813 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855323.8119066, 2c7f9a83-17b3-4e0f-8936-9e6a19920064 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.814 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] VM Started (Lifecycle Event)
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.817 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.823 185198 INFO nova.virt.libvirt.driver [-] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Instance spawned successfully.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.823 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.857 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.862 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.871 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.871 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.872 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.873 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.874 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.875 185198 DEBUG nova.virt.libvirt.driver [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.890 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:43 compute-0 podman[251219]: 2026-01-31 10:28:43.890204193 +0000 UTC m=+0.084625497 container create 04b84533c47fd73179eabac38e04b00538ec677184986e43bebb0f67d17f828f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.891 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855323.81201, 2c7f9a83-17b3-4e0f-8936-9e6a19920064 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.891 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] VM Paused (Lifecycle Event)
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.930 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:43 compute-0 systemd[1]: Started libpod-conmon-04b84533c47fd73179eabac38e04b00538ec677184986e43bebb0f67d17f828f.scope.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.936 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855323.817051, 2c7f9a83-17b3-4e0f-8936-9e6a19920064 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.937 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] VM Resumed (Lifecycle Event)
Jan 31 10:28:43 compute-0 podman[251219]: 2026-01-31 10:28:43.844026074 +0000 UTC m=+0.038447398 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.940 185198 INFO nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Took 9.05 seconds to spawn the instance on the hypervisor.
Jan 31 10:28:43 compute-0 nova_compute[185194]: 2026-01-31 10:28:43.941 185198 DEBUG nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 10:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f33dab4ba51ae38cc2b9763e8db964f598e8dde4e95d44106803f6dd776b08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 10:28:44 compute-0 podman[251219]: 2026-01-31 10:28:44.030807179 +0000 UTC m=+0.225228503 container init 04b84533c47fd73179eabac38e04b00538ec677184986e43bebb0f67d17f828f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 10:28:44 compute-0 podman[251219]: 2026-01-31 10:28:44.038421956 +0000 UTC m=+0.232843280 container start 04b84533c47fd73179eabac38e04b00538ec677184986e43bebb0f67d17f828f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 10:28:44 compute-0 neutron-haproxy-ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865[251234]: [NOTICE]   (251238) : New worker (251240) forked
Jan 31 10:28:44 compute-0 neutron-haproxy-ovnmeta-42bce1b2-79a6-4f08-8713-7d1e88cff865[251234]: [NOTICE]   (251238) : Loading success.
Jan 31 10:28:44 compute-0 nova_compute[185194]: 2026-01-31 10:28:44.195 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:44 compute-0 nova_compute[185194]: 2026-01-31 10:28:44.202 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:44 compute-0 nova_compute[185194]: 2026-01-31 10:28:44.230 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:44 compute-0 nova_compute[185194]: 2026-01-31 10:28:44.283 185198 INFO nova.compute.manager [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Took 10.05 seconds to build instance.
Jan 31 10:28:44 compute-0 nova_compute[185194]: 2026-01-31 10:28:44.315 185198 DEBUG oslo_concurrency.lockutils [None req-a53530b5-c9ce-4c2f-8aa7-9abd99ff6ef4 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.367 185198 DEBUG nova.network.neutron [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updating instance_info_cache with network_info: [{"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.375 185198 DEBUG nova.network.neutron [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updated VIF entry in instance network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.376 185198 DEBUG nova.network.neutron [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.404 185198 DEBUG oslo_concurrency.lockutils [req-d6137150-43c0-4232-b4e6-dfaab188b37d req-3ad217d4-62d2-46b0-a883-16935268443d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.419 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Releasing lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.420 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Instance network_info: |[{"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.420 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.421 185198 DEBUG nova.network.neutron [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Refreshing network info cache for port 448bb25f-ea93-4b1e-84eb-c57864b3a306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.426 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Start _get_guest_xml network_info=[{"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '5f1c614c-1ba8-4e34-915f-7078c46805eb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.434 185198 WARNING nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.442 185198 DEBUG nova.virt.libvirt.host [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.443 185198 DEBUG nova.virt.libvirt.host [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.455 185198 DEBUG nova.virt.libvirt.host [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.457 185198 DEBUG nova.virt.libvirt.host [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.457 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.458 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T10:27:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f3dcbeb5-bd7a-436b-a0c1-9d20fb387210',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.459 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.459 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.459 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.460 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.460 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.460 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.461 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.461 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.461 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.462 185198 DEBUG nova.virt.hardware [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.468 185198 DEBUG nova.virt.libvirt.vif [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1443151018',display_name='tempest-ServersTestJSON-server-1443151018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1443151018',id=8,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMPEIYmZ47TZ0+6Z2Dj6lZp2i6ERU696xuqHSNUsBN1HYIE0fiZpFZOgID4XilbfdoVoPKjrrY+lu05Af7+rIY8o/UWo7Hp53D9NHlYYqr9CCGsxTfITJ1Wdyk+SHSxzA==',key_name='tempest-keypair-1612973930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c8ba4707564805be9a18429ec92962',ramdisk_id='',reservation_id='r-h8yw8iws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1356590764',owner_user_name='tempest-ServersTestJSON-1356590764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8cec4e2f60d242508ce87cc6af1eea13',uuid=6a307f4f-f019-4927-9da3-50e4a4aec6f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.469 185198 DEBUG nova.network.os_vif_util [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converting VIF {"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.470 185198 DEBUG nova.network.os_vif_util [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.471 185198 DEBUG nova.objects.instance [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a307f4f-f019-4927-9da3-50e4a4aec6f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.502 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <uuid>6a307f4f-f019-4927-9da3-50e4a4aec6f3</uuid>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <name>instance-00000008</name>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <memory>131072</memory>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:name>tempest-ServersTestJSON-server-1443151018</nova:name>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:28:45</nova:creationTime>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:flavor name="m1.nano">
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:memory>128</nova:memory>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:user uuid="8cec4e2f60d242508ce87cc6af1eea13">tempest-ServersTestJSON-1356590764-project-member</nova:user>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:project uuid="e7c8ba4707564805be9a18429ec92962">tempest-ServersTestJSON-1356590764</nova:project>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="5f1c614c-1ba8-4e34-915f-7078c46805eb"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         <nova:port uuid="448bb25f-ea93-4b1e-84eb-c57864b3a306">
Jan 31 10:28:45 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <system>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="serial">6a307f4f-f019-4927-9da3-50e4a4aec6f3</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="uuid">6a307f4f-f019-4927-9da3-50e4a4aec6f3</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </system>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <os>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </os>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <features>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </features>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.config"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:42:08:c8"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <target dev="tap448bb25f-ea"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/console.log" append="off"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <video>
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </video>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:28:45 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:28:45 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:28:45 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:28:45 compute-0 nova_compute[185194]: </domain>
Jan 31 10:28:45 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.503 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Preparing to wait for external event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.503 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.503 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.504 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.504 185198 DEBUG nova.virt.libvirt.vif [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1443151018',display_name='tempest-ServersTestJSON-server-1443151018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1443151018',id=8,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMPEIYmZ47TZ0+6Z2Dj6lZp2i6ERU696xuqHSNUsBN1HYIE0fiZpFZOgID4XilbfdoVoPKjrrY+lu05Af7+rIY8o/UWo7Hp53D9NHlYYqr9CCGsxTfITJ1Wdyk+SHSxzA==',key_name='tempest-keypair-1612973930',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e7c8ba4707564805be9a18429ec92962',ramdisk_id='',reservation_id='r-h8yw8iws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1356590764',owner_user_name='tempest-ServersTestJSON-1356590764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:28:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8cec4e2f60d242508ce87cc6af1eea13',uuid=6a307f4f-f019-4927-9da3-50e4a4aec6f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.505 185198 DEBUG nova.network.os_vif_util [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converting VIF {"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.505 185198 DEBUG nova.network.os_vif_util [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.506 185198 DEBUG os_vif [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.506 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.507 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.507 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.511 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.511 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap448bb25f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.511 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap448bb25f-ea, col_values=(('external_ids', {'iface-id': '448bb25f-ea93-4b1e-84eb-c57864b3a306', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:08:c8', 'vm-uuid': '6a307f4f-f019-4927-9da3-50e4a4aec6f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.514 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:45 compute-0 NetworkManager[56281]: <info>  [1769855325.5154] manager: (tap448bb25f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.516 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.526 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.527 185198 INFO os_vif [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea')
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.633 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.634 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.634 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] No VIF found with MAC fa:16:3e:42:08:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:28:45 compute-0 nova_compute[185194]: 2026-01-31 10:28:45.635 185198 INFO nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Using config drive
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.699 185198 DEBUG nova.compute.manager [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received event network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.699 185198 DEBUG oslo_concurrency.lockutils [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.700 185198 DEBUG oslo_concurrency.lockutils [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.700 185198 DEBUG oslo_concurrency.lockutils [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.701 185198 DEBUG nova.compute.manager [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] No waiting events found dispatching network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.701 185198 WARNING nova.compute.manager [req-d68237e8-29b3-498a-b50f-30a5e0325c66 req-b5963fe7-0770-4dcc-bb1b-30610b25675d cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received unexpected event network-vif-plugged-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce for instance with vm_state active and task_state None.
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.867 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.868 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.868 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.869 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.869 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.871 185198 INFO nova.compute.manager [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Terminating instance
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.873 185198 DEBUG nova.compute.manager [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:28:46 compute-0 kernel: tap511d4760-42 (unregistering): left promiscuous mode
Jan 31 10:28:46 compute-0 NetworkManager[56281]: <info>  [1769855326.9245] device (tap511d4760-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.925 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:46 compute-0 ovn_controller[97627]: 2026-01-31T10:28:46Z|00076|binding|INFO|Releasing lport 511d4760-4285-458e-a3eb-d3db966dc54c from this chassis (sb_readonly=0)
Jan 31 10:28:46 compute-0 ovn_controller[97627]: 2026-01-31T10:28:46Z|00077|binding|INFO|Setting lport 511d4760-4285-458e-a3eb-d3db966dc54c down in Southbound
Jan 31 10:28:46 compute-0 ovn_controller[97627]: 2026-01-31T10:28:46Z|00078|binding|INFO|Removing iface tap511d4760-42 ovn-installed in OVS
Jan 31 10:28:46 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:46.941 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:bb:c9 10.100.0.10'], port_security=['fa:16:3e:28:bb:c9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '03e83c48-a567-468c-84c7-335a02ea7439', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '146dcb55f281466fa2f94bac5029431f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1902f3bc-e606-4f80-95f7-06e56cabbcd1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e79fc86-bb13-4bb1-9f7b-4e8c9020ea15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=511d4760-4285-458e-a3eb-d3db966dc54c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.943 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:46 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:46.942 106883 INFO neutron.agent.ovn.metadata.agent [-] Port 511d4760-4285-458e-a3eb-d3db966dc54c in datapath cf9c2056-599e-4c77-ba51-3ee29be3fa70 unbound from our chassis
Jan 31 10:28:46 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:46.944 106883 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf9c2056-599e-4c77-ba51-3ee29be3fa70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 10:28:46 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:46.945 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[6374ecb1-1e87-476b-8cb8-fbe98be37557]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:46 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:46.945 106883 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70 namespace which is not needed anymore
Jan 31 10:28:46 compute-0 nova_compute[185194]: 2026-01-31 10:28:46.947 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:46 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 31 10:28:46 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 4.323s CPU time.
Jan 31 10:28:46 compute-0 systemd-machined[156556]: Machine qemu-6-instance-00000006 terminated.
Jan 31 10:28:47 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [NOTICE]   (251054) : haproxy version is 2.8.14-c23fe91
Jan 31 10:28:47 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [NOTICE]   (251054) : path to executable is /usr/sbin/haproxy
Jan 31 10:28:47 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [WARNING]  (251054) : Exiting Master process...
Jan 31 10:28:47 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [ALERT]    (251054) : Current worker (251056) exited with code 143 (Terminated)
Jan 31 10:28:47 compute-0 neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70[251050]: [WARNING]  (251054) : All workers exited. Exiting... (0)
Jan 31 10:28:47 compute-0 systemd[1]: libpod-6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498.scope: Deactivated successfully.
Jan 31 10:28:47 compute-0 podman[251278]: 2026-01-31 10:28:47.132563615 +0000 UTC m=+0.085220721 container died 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.172 185198 INFO nova.virt.libvirt.driver [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Instance destroyed successfully.
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.173 185198 DEBUG nova.objects.instance [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lazy-loading 'resources' on Instance uuid 03e83c48-a567-468c-84c7-335a02ea7439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498-userdata-shm.mount: Deactivated successfully.
Jan 31 10:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e639a9fb322e3989cb889c817226148fa9140060516a2ab3b7d445f7763a3279-merged.mount: Deactivated successfully.
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.197 185198 DEBUG nova.virt.libvirt.vif [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T10:28:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1272167923',display_name='tempest-ServerAddressesTestJSON-server-1272167923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1272167923',id=6,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T10:28:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='146dcb55f281466fa2f94bac5029431f',ramdisk_id='',reservation_id='r-jvummq5l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1224911199',owner_user_name='tempest-ServerAddressesTestJSON-1224911199-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T10:28:43Z,user_data=None,user_id='7569d7689502423193b6d841d1f880c0',uuid=03e83c48-a567-468c-84c7-335a02ea7439,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.197 185198 DEBUG nova.network.os_vif_util [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converting VIF {"id": "511d4760-4285-458e-a3eb-d3db966dc54c", "address": "fa:16:3e:28:bb:c9", "network": {"id": "cf9c2056-599e-4c77-ba51-3ee29be3fa70", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-917441389-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "146dcb55f281466fa2f94bac5029431f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap511d4760-42", "ovs_interfaceid": "511d4760-4285-458e-a3eb-d3db966dc54c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:47 compute-0 podman[251278]: 2026-01-31 10:28:47.198703345 +0000 UTC m=+0.151360441 container cleanup 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.198 185198 DEBUG nova.network.os_vif_util [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.199 185198 DEBUG os_vif [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.201 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.202 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap511d4760-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.208 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.210 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.210 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.213 185198 INFO os_vif [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:bb:c9,bridge_name='br-int',has_traffic_filtering=True,id=511d4760-4285-458e-a3eb-d3db966dc54c,network=Network(cf9c2056-599e-4c77-ba51-3ee29be3fa70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap511d4760-42')
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.214 185198 INFO nova.virt.libvirt.driver [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Deleting instance files /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439_del
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.214 185198 INFO nova.virt.libvirt.driver [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Deletion of /var/lib/nova/instances/03e83c48-a567-468c-84c7-335a02ea7439_del complete
Jan 31 10:28:47 compute-0 systemd[1]: libpod-conmon-6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498.scope: Deactivated successfully.
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.259 185198 INFO nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Creating config drive at /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.config
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.265 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpj8z02sc8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.305 185198 INFO nova.compute.manager [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Took 0.43 seconds to destroy the instance on the hypervisor.
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.306 185198 DEBUG oslo.service.loopingcall [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.307 185198 DEBUG nova.compute.manager [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.307 185198 DEBUG nova.network.neutron [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:28:47 compute-0 podman[251325]: 2026-01-31 10:28:47.307294522 +0000 UTC m=+0.070415527 container remove 6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.315 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[93cee25a-c527-4d89-8e40-06313e41d6e8]: (4, ('Sat Jan 31 10:28:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70 (6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498)\n6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498\nSat Jan 31 10:28:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70 (6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498)\n6b6b43decc7994c12c494a1b37afd625f6da56fca7d400bcfda45df5a1b73498\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.318 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc125a6-f011-41ab-bb1b-d5e6e79dc416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.320 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf9c2056-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.323 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 kernel: tapcf9c2056-50: left promiscuous mode
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.344 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.346 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5233fc-5524-4ece-8f46-963e1a7da613]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.348 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.370 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1892d489-6281-4ddb-83c6-d88c6a994e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.372 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[4113d75c-fd00-43f8-9d30-edb2a2e3ebfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.384 185198 DEBUG oslo_concurrency.processutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpj8z02sc8" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.392 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1552712a-daa3-45ca-948f-0f847e8b7bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554076, 'reachable_time': 31323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251345, 'error': None, 'target': 'ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf9c2056\x2d599e\x2d4c77\x2dba51\x2d3ee29be3fa70.mount: Deactivated successfully.
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.403 107396 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf9c2056-599e-4c77-ba51-3ee29be3fa70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.403 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb4bb4c-02e2-427b-a117-e31d93c7cb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.4561] manager: (tap448bb25f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 31 10:28:47 compute-0 kernel: tap448bb25f-ea: entered promiscuous mode
Jan 31 10:28:47 compute-0 systemd-udevd[251259]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.458 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_controller[97627]: 2026-01-31T10:28:47Z|00079|binding|INFO|Claiming lport 448bb25f-ea93-4b1e-84eb-c57864b3a306 for this chassis.
Jan 31 10:28:47 compute-0 ovn_controller[97627]: 2026-01-31T10:28:47Z|00080|binding|INFO|448bb25f-ea93-4b1e-84eb-c57864b3a306: Claiming fa:16:3e:42:08:c8 10.100.0.8
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.461 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.4744] device (tap448bb25f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.474 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:08:c8 10.100.0.8'], port_security=['fa:16:3e:42:08:c8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a307f4f-f019-4927-9da3-50e4a4aec6f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c8ba4707564805be9a18429ec92962', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6c380d6-c866-43c6-8257-9acfaa43b3bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82bc447-71d9-47d2-88bd-dee1db3c8666, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=448bb25f-ea93-4b1e-84eb-c57864b3a306) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.4758] device (tap448bb25f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:28:47 compute-0 ovn_controller[97627]: 2026-01-31T10:28:47Z|00081|binding|INFO|Setting lport 448bb25f-ea93-4b1e-84eb-c57864b3a306 ovn-installed in OVS
Jan 31 10:28:47 compute-0 ovn_controller[97627]: 2026-01-31T10:28:47Z|00082|binding|INFO|Setting lport 448bb25f-ea93-4b1e-84eb-c57864b3a306 up in Southbound
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.480 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.482 106883 INFO neutron.agent.ovn.metadata.agent [-] Port 448bb25f-ea93-4b1e-84eb-c57864b3a306 in datapath 48aa712a-2069-4774-86bb-f2b26b8d0de8 bound to our chassis
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.484 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.490 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48aa712a-2069-4774-86bb-f2b26b8d0de8
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.502 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[9f13715c-7092-42fd-a458-7a9043acf505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.504 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48aa712a-21 in ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.507 238337 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48aa712a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.507 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[821816b1-fca2-42d3-8da5-2ede932f3d5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.515 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[e646add8-9089-4430-a734-650887187839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 systemd-machined[156556]: New machine qemu-8-instance-00000008.
Jan 31 10:28:47 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.535 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac88554-9042-4c61-bac3-1ed21389fa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.549 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[263fb679-c38a-447b-ac3b-1a34e4f8c460]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.571 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.582 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[378ab924-b9b4-4a64-a6e5-b2a6896553ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.5964] manager: (tap48aa712a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.588 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[d2dcc855-c2e9-4f36-9e56-50718834d0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.636 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[e296d92b-8b88-434b-b99f-ba61fc908670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.641 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[e19b58fc-2255-42e7-a56c-8e6990b4c3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.6710] device (tap48aa712a-20): carrier: link connected
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.674 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[3f928f09-1cae-4fff-ab01-4d833936421f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.689 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea089ca-a93e-47db-a01d-991256b49121]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48aa712a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:69:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554915, 'reachable_time': 18397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251391, 'error': None, 'target': 'ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.702 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[40636c01-af6e-4be6-898a-7c1555e4a67a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:6933'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554915, 'tstamp': 554915}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251392, 'error': None, 'target': 'ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.716 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c8117725-080f-4719-b976-a65eadfa9714]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48aa712a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:69:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554915, 'reachable_time': 18397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251393, 'error': None, 'target': 'ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.740 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[841aeb68-f7d4-4298-bb27-c3ffb0e0270d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.766 185198 DEBUG nova.compute.manager [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-unplugged-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.767 185198 DEBUG oslo_concurrency.lockutils [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.767 185198 DEBUG oslo_concurrency.lockutils [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.768 185198 DEBUG oslo_concurrency.lockutils [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.768 185198 DEBUG nova.compute.manager [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] No waiting events found dispatching network-vif-unplugged-511d4760-4285-458e-a3eb-d3db966dc54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.768 185198 DEBUG nova.compute.manager [req-75002a73-1604-4ecf-aa97-a3edb1c6fecb req-8ee9471e-2302-4cce-8deb-e204ef8c3001 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-unplugged-511d4760-4285-458e-a3eb-d3db966dc54c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.825 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[7c09c152-66c3-4f9e-96dd-1e89bc8f22a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.827 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48aa712a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.828 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.828 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48aa712a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.831 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 NetworkManager[56281]: <info>  [1769855327.8338] manager: (tap48aa712a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 31 10:28:47 compute-0 kernel: tap48aa712a-20: entered promiscuous mode
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.837 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.844 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48aa712a-20, col_values=(('external_ids', {'iface-id': 'ed331b8c-7632-42db-950b-31f3fda9d7ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.847 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_controller[97627]: 2026-01-31T10:28:47Z|00083|binding|INFO|Releasing lport ed331b8c-7632-42db-950b-31f3fda9d7ab from this chassis (sb_readonly=0)
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.848 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.856 106883 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48aa712a-2069-4774-86bb-f2b26b8d0de8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48aa712a-2069-4774-86bb-f2b26b8d0de8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 10:28:47 compute-0 nova_compute[185194]: 2026-01-31 10:28:47.857 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.858 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[d640698d-dae8-4bcd-becd-11b66e259723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.859 106883 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: global
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     log         /dev/log local0 debug
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     log-tag     haproxy-metadata-proxy-48aa712a-2069-4774-86bb-f2b26b8d0de8
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     user        root
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     group       root
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     maxconn     1024
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     pidfile     /var/lib/neutron/external/pids/48aa712a-2069-4774-86bb-f2b26b8d0de8.pid.haproxy
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     daemon
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: defaults
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     log global
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     mode http
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     option httplog
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     option dontlognull
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     option http-server-close
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     option forwardfor
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     retries                 3
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     timeout http-request    30s
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     timeout connect         30s
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     timeout client          32s
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     timeout server          32s
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     timeout http-keep-alive 30s
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: listen listener
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     bind 169.254.169.254:80
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:     http-request add-header X-OVN-Network-ID 48aa712a-2069-4774-86bb-f2b26b8d0de8
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 10:28:47 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:47.860 106883 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'env', 'PROCESS_TAG=haproxy-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48aa712a-2069-4774-86bb-f2b26b8d0de8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.423 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855328.416343, 6a307f4f-f019-4927-9da3-50e4a4aec6f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.424 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] VM Started (Lifecycle Event)
Jan 31 10:28:48 compute-0 podman[251430]: 2026-01-31 10:28:48.456669804 +0000 UTC m=+0.086913353 container create 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:28:48 compute-0 podman[251430]: 2026-01-31 10:28:48.416715509 +0000 UTC m=+0.046959028 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 10:28:48 compute-0 systemd[1]: Started libpod-conmon-579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5.scope.
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.576 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.587 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855328.4164674, 6a307f4f-f019-4927-9da3-50e4a4aec6f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.588 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] VM Paused (Lifecycle Event)
Jan 31 10:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f5373fdc4d95d60256481113ed828e1bae3c6b0933ac8b976a36d7fa0e649b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.620 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.629 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:48 compute-0 podman[251430]: 2026-01-31 10:28:48.635643396 +0000 UTC m=+0.265886985 container init 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 10:28:48 compute-0 podman[251430]: 2026-01-31 10:28:48.647456347 +0000 UTC m=+0.277699876 container start 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 10:28:48 compute-0 nova_compute[185194]: 2026-01-31 10:28:48.674 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:48 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [NOTICE]   (251450) : New worker (251452) forked
Jan 31 10:28:48 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [NOTICE]   (251450) : Loading success.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.064 185198 DEBUG nova.compute.manager [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.066 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.068 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.068 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.069 185198 DEBUG nova.compute.manager [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Processing event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.070 185198 DEBUG nova.compute.manager [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.071 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.071 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.072 185198 DEBUG oslo_concurrency.lockutils [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.073 185198 DEBUG nova.compute.manager [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] No waiting events found dispatching network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.074 185198 WARNING nova.compute.manager [req-f0c17ff1-64cb-4fc1-b2f0-3e756181c59d req-7e5a3f30-d6d6-498d-83b8-ca71a261904f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received unexpected event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 for instance with vm_state building and task_state spawning.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.075 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.083 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855329.0824769, 6a307f4f-f019-4927-9da3-50e4a4aec6f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.083 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] VM Resumed (Lifecycle Event)
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.086 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.099 185198 INFO nova.virt.libvirt.driver [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Instance spawned successfully.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.100 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.129 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.140 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.159 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.160 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.161 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.162 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.163 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.164 185198 DEBUG nova.virt.libvirt.driver [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.176 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.246 185198 INFO nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Took 11.93 seconds to spawn the instance on the hypervisor.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.247 185198 DEBUG nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.329 185198 INFO nova.compute.manager [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Took 12.58 seconds to build instance.
Jan 31 10:28:49 compute-0 nova_compute[185194]: 2026-01-31 10:28:49.380 185198 DEBUG oslo_concurrency.lockutils [None req-df84eebe-8672-4e1c-81eb-446d961750b9 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.081 185198 DEBUG nova.compute.manager [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.084 185198 DEBUG oslo_concurrency.lockutils [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "03e83c48-a567-468c-84c7-335a02ea7439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.085 185198 DEBUG oslo_concurrency.lockutils [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.085 185198 DEBUG oslo_concurrency.lockutils [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.086 185198 DEBUG nova.compute.manager [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] No waiting events found dispatching network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.087 185198 WARNING nova.compute.manager [req-c32f1084-8139-4c3a-9bd8-d395745a7a1e req-297b18b0-3f9e-47de-ba8d-cbc36b0f055c cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received unexpected event network-vif-plugged-511d4760-4285-458e-a3eb-d3db966dc54c for instance with vm_state active and task_state deleting.
Jan 31 10:28:50 compute-0 ovn_controller[97627]: 2026-01-31T10:28:50Z|00084|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:28:50 compute-0 ovn_controller[97627]: 2026-01-31T10:28:50Z|00085|binding|INFO|Releasing lport ed331b8c-7632-42db-950b-31f3fda9d7ab from this chassis (sb_readonly=0)
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.231 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.412 185198 DEBUG nova.network.neutron [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updated VIF entry in instance network info cache for port 448bb25f-ea93-4b1e-84eb-c57864b3a306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.413 185198 DEBUG nova.network.neutron [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updating instance_info_cache with network_info: [{"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.441 185198 DEBUG oslo_concurrency.lockutils [req-ffe88181-faa2-4393-bb15-193af2f41364 req-7ebeba41-8077-4aa6-bc20-41374eaeff54 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.624 185198 DEBUG nova.network.neutron [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.678 185198 INFO nova.compute.manager [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Took 3.37 seconds to deallocate network for instance.
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.737 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.739 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.917 185198 DEBUG nova.compute.provider_tree [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.948 185198 DEBUG nova.scheduler.client.report [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:28:50 compute-0 podman[251462]: 2026-01-31 10:28:50.986569424 +0000 UTC m=+0.096991391 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:28:50 compute-0 nova_compute[185194]: 2026-01-31 10:28:50.989 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:51 compute-0 nova_compute[185194]: 2026-01-31 10:28:51.038 185198 INFO nova.scheduler.client.report [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Deleted allocations for instance 03e83c48-a567-468c-84c7-335a02ea7439
Jan 31 10:28:51 compute-0 nova_compute[185194]: 2026-01-31 10:28:51.142 185198 DEBUG oslo_concurrency.lockutils [None req-1d46f66c-bacc-4173-b304-55151a9ccaf2 7569d7689502423193b6d841d1f880c0 146dcb55f281466fa2f94bac5029431f - - default default] Lock "03e83c48-a567-468c-84c7-335a02ea7439" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.207 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.225 185198 DEBUG nova.compute.manager [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.225 185198 DEBUG nova.compute.manager [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing instance network info cache due to event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.226 185198 DEBUG oslo_concurrency.lockutils [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.226 185198 DEBUG oslo_concurrency.lockutils [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.226 185198 DEBUG nova.network.neutron [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.573 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:52 compute-0 nova_compute[185194]: 2026-01-31 10:28:52.986 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:54 compute-0 nova_compute[185194]: 2026-01-31 10:28:54.950 185198 DEBUG nova.compute.manager [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-changed-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:54 compute-0 nova_compute[185194]: 2026-01-31 10:28:54.951 185198 DEBUG nova.compute.manager [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Refreshing instance network info cache due to event network-changed-448bb25f-ea93-4b1e-84eb-c57864b3a306. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:28:54 compute-0 nova_compute[185194]: 2026-01-31 10:28:54.952 185198 DEBUG oslo_concurrency.lockutils [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:28:54 compute-0 nova_compute[185194]: 2026-01-31 10:28:54.953 185198 DEBUG oslo_concurrency.lockutils [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:28:54 compute-0 nova_compute[185194]: 2026-01-31 10:28:54.953 185198 DEBUG nova.network.neutron [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Refreshing network info cache for port 448bb25f-ea93-4b1e-84eb-c57864b3a306 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:28:56 compute-0 podman[251489]: 2026-01-31 10:28:56.018031277 +0000 UTC m=+0.129216825 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1769056855, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-01-22T05:09:47Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Jan 31 10:28:56 compute-0 nova_compute[185194]: 2026-01-31 10:28:56.574 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:56 compute-0 nova_compute[185194]: 2026-01-31 10:28:56.997 185198 DEBUG nova.network.neutron [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updated VIF entry in instance network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:28:56 compute-0 nova_compute[185194]: 2026-01-31 10:28:56.998 185198 DEBUG nova.network.neutron [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.032 185198 DEBUG oslo_concurrency.lockutils [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.033 185198 DEBUG nova.compute.manager [req-13db6062-4cd4-47fa-a6f9-8ab435c0e123 req-079fde10-4aa9-45d9-92a5-5cc6779282a4 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Received event network-vif-deleted-511d4760-4285-458e-a3eb-d3db966dc54c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.086 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.087 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.087 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.088 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.088 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.089 185198 INFO nova.compute.manager [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Terminating instance
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.090 185198 DEBUG nova.compute.manager [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 10:28:57 compute-0 kernel: tap448bb25f-ea (unregistering): left promiscuous mode
Jan 31 10:28:57 compute-0 NetworkManager[56281]: <info>  [1769855337.1326] device (tap448bb25f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.141 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 ovn_controller[97627]: 2026-01-31T10:28:57Z|00086|binding|INFO|Releasing lport 448bb25f-ea93-4b1e-84eb-c57864b3a306 from this chassis (sb_readonly=0)
Jan 31 10:28:57 compute-0 ovn_controller[97627]: 2026-01-31T10:28:57Z|00087|binding|INFO|Setting lport 448bb25f-ea93-4b1e-84eb-c57864b3a306 down in Southbound
Jan 31 10:28:57 compute-0 ovn_controller[97627]: 2026-01-31T10:28:57Z|00088|binding|INFO|Removing iface tap448bb25f-ea ovn-installed in OVS
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.150 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.153 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:08:c8 10.100.0.8'], port_security=['fa:16:3e:42:08:c8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6a307f4f-f019-4927-9da3-50e4a4aec6f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e7c8ba4707564805be9a18429ec92962', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6c380d6-c866-43c6-8257-9acfaa43b3bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a82bc447-71d9-47d2-88bd-dee1db3c8666, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=448bb25f-ea93-4b1e-84eb-c57864b3a306) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.155 106883 INFO neutron.agent.ovn.metadata.agent [-] Port 448bb25f-ea93-4b1e-84eb-c57864b3a306 in datapath 48aa712a-2069-4774-86bb-f2b26b8d0de8 unbound from our chassis
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.157 106883 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48aa712a-2069-4774-86bb-f2b26b8d0de8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.159 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[c662792b-42fd-4b3c-b7fb-eb24b64f5fb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.160 106883 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8 namespace which is not needed anymore
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.168 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 31 10:28:57 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 8.980s CPU time.
Jan 31 10:28:57 compute-0 systemd-machined[156556]: Machine qemu-8-instance-00000008 terminated.
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.208 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 podman[251507]: 2026-01-31 10:28:57.229299654 +0000 UTC m=+0.076537617 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 10:28:57 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [NOTICE]   (251450) : haproxy version is 2.8.14-c23fe91
Jan 31 10:28:57 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [NOTICE]   (251450) : path to executable is /usr/sbin/haproxy
Jan 31 10:28:57 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [WARNING]  (251450) : Exiting Master process...
Jan 31 10:28:57 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [ALERT]    (251450) : Current worker (251452) exited with code 143 (Terminated)
Jan 31 10:28:57 compute-0 neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8[251446]: [WARNING]  (251450) : All workers exited. Exiting... (0)
Jan 31 10:28:57 compute-0 systemd[1]: libpod-579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5.scope: Deactivated successfully.
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.315 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 podman[251546]: 2026-01-31 10:28:57.311028429 +0000 UTC m=+0.059325023 container died 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.325 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.357 185198 INFO nova.virt.libvirt.driver [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Instance destroyed successfully.
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.358 185198 DEBUG nova.objects.instance [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lazy-loading 'resources' on Instance uuid 6a307f4f-f019-4927-9da3-50e4a4aec6f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5-userdata-shm.mount: Deactivated successfully.
Jan 31 10:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0f5373fdc4d95d60256481113ed828e1bae3c6b0933ac8b976a36d7fa0e649b-merged.mount: Deactivated successfully.
Jan 31 10:28:57 compute-0 podman[251546]: 2026-01-31 10:28:57.377527208 +0000 UTC m=+0.125823772 container cleanup 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.381 185198 DEBUG nova.virt.libvirt.vif [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T10:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1443151018',display_name='tempest-ServersTestJSON-server-1443151018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1443151018',id=8,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNMPEIYmZ47TZ0+6Z2Dj6lZp2i6ERU696xuqHSNUsBN1HYIE0fiZpFZOgID4XilbfdoVoPKjrrY+lu05Af7+rIY8o/UWo7Hp53D9NHlYYqr9CCGsxTfITJ1Wdyk+SHSxzA==',key_name='tempest-keypair-1612973930',keypairs=<?>,launch_index=0,launched_at=2026-01-31T10:28:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e7c8ba4707564805be9a18429ec92962',ramdisk_id='',reservation_id='r-h8yw8iws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1356590764',owner_user_name='tempest-ServersTestJSON-1356590764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T10:28:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8cec4e2f60d242508ce87cc6af1eea13',uuid=6a307f4f-f019-4927-9da3-50e4a4aec6f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.383 185198 DEBUG nova.network.os_vif_util [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converting VIF {"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.384 185198 DEBUG nova.network.os_vif_util [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:28:57 compute-0 systemd[1]: libpod-conmon-579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5.scope: Deactivated successfully.
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.386 185198 DEBUG os_vif [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.391 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.392 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap448bb25f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.395 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.401 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.403 185198 INFO os_vif [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:08:c8,bridge_name='br-int',has_traffic_filtering=True,id=448bb25f-ea93-4b1e-84eb-c57864b3a306,network=Network(48aa712a-2069-4774-86bb-f2b26b8d0de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448bb25f-ea')
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.404 185198 INFO nova.virt.libvirt.driver [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Deleting instance files /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3_del
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.405 185198 INFO nova.virt.libvirt.driver [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Deletion of /var/lib/nova/instances/6a307f4f-f019-4927-9da3-50e4a4aec6f3_del complete
Jan 31 10:28:57 compute-0 podman[251590]: 2026-01-31 10:28:57.454462604 +0000 UTC m=+0.059934278 container remove 579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.460 185198 INFO nova.compute.manager [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.461 185198 DEBUG oslo.service.loopingcall [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.461 185198 DEBUG nova.compute.manager [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.461 185198 DEBUG nova.network.neutron [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.460 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[71f29eba-11ef-46b9-afd0-af6108ea2f8a]: (4, ('Sat Jan 31 10:28:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8 (579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5)\n579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5\nSat Jan 31 10:28:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8 (579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5)\n579c4d1b5605d620924eba4f24b0c5f67c441306d2902306d762538c223cc2f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.465 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[700e9c0c-c22b-405d-8092-89d5755e3543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.467 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48aa712a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.469 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 kernel: tap48aa712a-20: left promiscuous mode
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.482 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.488 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.489 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[2f79ed1a-67b2-4728-83d0-c0f64066c7d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.503 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ba69b-c151-4416-876a-950dfc92d18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.505 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[0f000a98-4927-499d-b91e-ae0162ad23e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.522 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[94d98f76-e078-42ef-bda9-bc2ffa4787ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554906, 'reachable_time': 20503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251604, 'error': None, 'target': 'ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d48aa712a\x2d2069\x2d4774\x2d86bb\x2df2b26b8d0de8.mount: Deactivated successfully.
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.526 107396 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48aa712a-2069-4774-86bb-f2b26b8d0de8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 10:28:57 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:28:57.526 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[c63d4ccf-2c76-491b-9a2a-8c21d1125fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.575 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.768 185198 DEBUG nova.compute.manager [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-unplugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.769 185198 DEBUG oslo_concurrency.lockutils [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.770 185198 DEBUG oslo_concurrency.lockutils [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.771 185198 DEBUG oslo_concurrency.lockutils [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.773 185198 DEBUG nova.compute.manager [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] No waiting events found dispatching network-vif-unplugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:57 compute-0 nova_compute[185194]: 2026-01-31 10:28:57.774 185198 DEBUG nova.compute.manager [req-d99e9afd-7b40-43dc-ae22-dc7db5ae2af7 req-3ce85f51-9039-48f5-861f-b2ad0e97d1e9 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-unplugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 10:28:58 compute-0 ovn_controller[97627]: 2026-01-31T10:28:58Z|00089|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.005 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.206 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:28:59 compute-0 podman[201068]: time="2026-01-31T10:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:28:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:28:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.813 185198 DEBUG nova.network.neutron [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updated VIF entry in instance network info cache for port 448bb25f-ea93-4b1e-84eb-c57864b3a306. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.814 185198 DEBUG nova.network.neutron [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updating instance_info_cache with network_info: [{"id": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "address": "fa:16:3e:42:08:c8", "network": {"id": "48aa712a-2069-4774-86bb-f2b26b8d0de8", "bridge": "br-int", "label": "tempest-ServersTestJSON-671322791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e7c8ba4707564805be9a18429ec92962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448bb25f-ea", "ovs_interfaceid": "448bb25f-ea93-4b1e-84eb-c57864b3a306", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.893 185198 DEBUG oslo_concurrency.lockutils [req-99d4171e-2cce-48de-9d88-828533702aad req-41af194f-b455-4110-b7d1-177795a7d541 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-6a307f4f-f019-4927-9da3-50e4a4aec6f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.903 185198 DEBUG nova.compute.manager [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.904 185198 DEBUG oslo_concurrency.lockutils [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.904 185198 DEBUG oslo_concurrency.lockutils [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.905 185198 DEBUG oslo_concurrency.lockutils [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.905 185198 DEBUG nova.compute.manager [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] No waiting events found dispatching network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 10:28:59 compute-0 nova_compute[185194]: 2026-01-31 10:28:59.906 185198 WARNING nova.compute.manager [req-a6dec831-748d-43f6-9d00-b699a99b0f6b req-cb155fa9-b2e7-44dd-8ac6-44d5753a269f cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received unexpected event network-vif-plugged-448bb25f-ea93-4b1e-84eb-c57864b3a306 for instance with vm_state active and task_state deleting.
Jan 31 10:29:01 compute-0 openstack_network_exporter[204162]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:29:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:29:01 compute-0 openstack_network_exporter[204162]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:29:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:29:02 compute-0 ovn_controller[97627]: 2026-01-31T10:29:02Z|00090|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.088 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.158 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769855327.15716, 03e83c48-a567-468c-84c7-335a02ea7439 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.158 185198 INFO nova.compute.manager [-] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] VM Stopped (Lifecycle Event)
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.194 185198 DEBUG nova.compute.manager [None req-3ade6a97-2464-4249-9aed-751b7615deca - - - - - -] [instance: 03e83c48-a567-468c-84c7-335a02ea7439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.279 185198 DEBUG nova.network.neutron [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.302 185198 INFO nova.compute.manager [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Took 4.84 seconds to deallocate network for instance.
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.354 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.355 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.395 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.432 185198 DEBUG nova.compute.manager [req-9079386a-c5ff-4bc1-ad1a-0ffe91923643 req-9572f9f3-88ae-4c46-8102-16ba8f60b0f5 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Received event network-vif-deleted-448bb25f-ea93-4b1e-84eb-c57864b3a306 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.551 185198 DEBUG nova.compute.provider_tree [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.580 185198 DEBUG nova.scheduler.client.report [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.591 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.631 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.671 185198 INFO nova.scheduler.client.report [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Deleted allocations for instance 6a307f4f-f019-4927-9da3-50e4a4aec6f3
Jan 31 10:29:02 compute-0 nova_compute[185194]: 2026-01-31 10:29:02.757 185198 DEBUG oslo_concurrency.lockutils [None req-1ba7b1dc-8790-4c86-8c2e-376a17bed08e 8cec4e2f60d242508ce87cc6af1eea13 e7c8ba4707564805be9a18429ec92962 - - default default] Lock "6a307f4f-f019-4927-9da3-50e4a4aec6f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:03 compute-0 nova_compute[185194]: 2026-01-31 10:29:03.669 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:04 compute-0 nova_compute[185194]: 2026-01-31 10:29:04.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:04 compute-0 nova_compute[185194]: 2026-01-31 10:29:04.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:29:04 compute-0 nova_compute[185194]: 2026-01-31 10:29:04.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:29:05 compute-0 podman[251606]: 2026-01-31 10:29:05.003020513 +0000 UTC m=+0.120429150 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.build-date=20260126, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 10:29:05 compute-0 podman[251605]: 2026-01-31 10:29:05.016057014 +0000 UTC m=+0.134474766 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 10:29:05 compute-0 nova_compute[185194]: 2026-01-31 10:29:05.571 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:29:05 compute-0 nova_compute[185194]: 2026-01-31 10:29:05.572 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:29:05 compute-0 nova_compute[185194]: 2026-01-31 10:29:05.572 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:29:05 compute-0 nova_compute[185194]: 2026-01-31 10:29:05.572 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:29:07 compute-0 nova_compute[185194]: 2026-01-31 10:29:07.118 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:07 compute-0 nova_compute[185194]: 2026-01-31 10:29:07.397 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:07 compute-0 nova_compute[185194]: 2026-01-31 10:29:07.582 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:08 compute-0 podman[251648]: 2026-01-31 10:29:08.97012299 +0000 UTC m=+0.090667156 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.096 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.134 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.135 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.136 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.136 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.137 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.137 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:10 compute-0 nova_compute[185194]: 2026-01-31 10:29:10.137 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:29:11 compute-0 nova_compute[185194]: 2026-01-31 10:29:11.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:11 compute-0 nova_compute[185194]: 2026-01-31 10:29:11.607 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:11 compute-0 podman[251674]: 2026-01-31 10:29:11.978089515 +0000 UTC m=+0.091833275 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 10:29:12 compute-0 podman[251673]: 2026-01-31 10:29:12.001964324 +0000 UTC m=+0.122950142 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vcs-type=git, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, config_id=kepler, version=9.4, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.349 185198 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769855337.3481631, 6a307f4f-f019-4927-9da3-50e4a4aec6f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.351 185198 INFO nova.compute.manager [-] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] VM Stopped (Lifecycle Event)
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.400 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.500 185198 DEBUG nova.compute.manager [None req-4b9ebed0-c52c-409f-b58c-b776043b3b75 - - - - - -] [instance: 6a307f4f-f019-4927-9da3-50e4a4aec6f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.529 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:12 compute-0 nova_compute[185194]: 2026-01-31 10:29:12.585 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:13 compute-0 nova_compute[185194]: 2026-01-31 10:29:13.279 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:14 compute-0 ovn_controller[97627]: 2026-01-31T10:29:14Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:66:1f 10.100.0.8
Jan 31 10:29:14 compute-0 ovn_controller[97627]: 2026-01-31T10:29:14Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:66:1f 10.100.0.8
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.650 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.651 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.652 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.652 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.836 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.934 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:15 compute-0 nova_compute[185194]: 2026-01-31 10:29:15.936 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.025 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:16.459 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:16.460 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:16.461 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.547 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.549 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=72.33081817626953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.550 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.550 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.981 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 2c7f9a83-17b3-4e0f-8936-9e6a19920064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.982 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:29:16 compute-0 nova_compute[185194]: 2026-01-31 10:29:16.982 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.006 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing inventories for resource provider 1f8a458f-baaf-434f-841c-59d735622205 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.035 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating ProviderTree inventory for provider 1f8a458f-baaf-434f-841c-59d735622205 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.036 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Updating inventory in ProviderTree for provider 1f8a458f-baaf-434f-841c-59d735622205 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.058 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing aggregate associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.081 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Refreshing trait associations for resource provider 1f8a458f-baaf-434f-841c-59d735622205, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_F16C,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.405 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.459 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.487 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.588 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.688 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:29:17 compute-0 nova_compute[185194]: 2026-01-31 10:29:17.689 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:18 compute-0 ovn_controller[97627]: 2026-01-31T10:29:18Z|00091|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:29:18 compute-0 nova_compute[185194]: 2026-01-31 10:29:18.753 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.201 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.202 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.236 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.366 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.367 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.388 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.389 185198 INFO nova.compute.claims [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Claim successful on node compute-0.ctlplane.example.com
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.609 185198 DEBUG nova.compute.provider_tree [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.641 185198 DEBUG nova.scheduler.client.report [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.666 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.667 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.740 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.741 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.901 185198 INFO nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 10:29:19 compute-0 nova_compute[185194]: 2026-01-31 10:29:19.928 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.068 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.070 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.071 185198 INFO nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Creating image(s)
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.072 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.072 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.073 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.099 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.184 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.186 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "182dd0a237ed06a0f4beb35bec448249e2991750" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.189 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.216 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.301 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.303 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.361 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750,backing_fmt=raw /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.363 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "182dd0a237ed06a0f4beb35bec448249e2991750" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.363 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.440 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.441 185198 DEBUG nova.virt.disk.api [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Checking if we can resize image /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.442 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.532 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.534 185198 DEBUG nova.virt.disk.api [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Cannot resize image /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.534 185198 DEBUG nova.objects.instance [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lazy-loading 'migration_context' on Instance uuid a0208ed9-3ae5-4499-81e2-3f2cb621f74d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.555 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.555 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Ensure instance console log exists: /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.556 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.557 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.557 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:20 compute-0 nova_compute[185194]: 2026-01-31 10:29:20.597 185198 DEBUG nova.policy [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e738cdb50d649b9968d2e1cdea3b9cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cbe14d6b97b64029ae17f2f239669a6f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 10:29:22 compute-0 podman[251746]: 2026-01-31 10:29:22.012100689 +0000 UTC m=+0.126674274 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 31 10:29:22 compute-0 nova_compute[185194]: 2026-01-31 10:29:22.233 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:22 compute-0 nova_compute[185194]: 2026-01-31 10:29:22.409 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:22 compute-0 nova_compute[185194]: 2026-01-31 10:29:22.590 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:24 compute-0 nova_compute[185194]: 2026-01-31 10:29:24.273 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Successfully created port: 56ca1581-424b-4d3f-8fdb-91c955b6ada3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 10:29:25 compute-0 nova_compute[185194]: 2026-01-31 10:29:25.873 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:26 compute-0 nova_compute[185194]: 2026-01-31 10:29:26.029 185198 DEBUG nova.objects.instance [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Lazy-loading 'flavor' on Instance uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:29:26 compute-0 nova_compute[185194]: 2026-01-31 10:29:26.143 185198 DEBUG oslo_concurrency.lockutils [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:29:26 compute-0 nova_compute[185194]: 2026-01-31 10:29:26.144 185198 DEBUG oslo_concurrency.lockutils [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:29:27 compute-0 podman[251770]: 2026-01-31 10:29:27.015520374 +0000 UTC m=+0.125084445 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_id=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-01-22T05:09:47Z, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.buildah.version=1.33.7, release=1769056855, build-date=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 31 10:29:27 compute-0 nova_compute[185194]: 2026-01-31 10:29:27.413 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:27 compute-0 nova_compute[185194]: 2026-01-31 10:29:27.593 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:27 compute-0 nova_compute[185194]: 2026-01-31 10:29:27.974 185198 DEBUG nova.network.neutron [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:29:27 compute-0 podman[251792]: 2026-01-31 10:29:27.974728438 +0000 UTC m=+0.094899561 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.315 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Successfully updated port: 56ca1581-424b-4d3f-8fdb-91c955b6ada3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.430 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.431 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquired lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.431 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.606 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.607 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.607 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.607 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.607 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.608 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.710 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.718 185198 DEBUG nova.compute.manager [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Received event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.718 185198 DEBUG nova.compute.manager [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing instance network info cache due to event network-changed-b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.719 185198 DEBUG oslo_concurrency.lockutils [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.728 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.728 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Image id 5f1c614c-1ba8-4e34-915f-7078c46805eb yields fingerprint 182dd0a237ed06a0f4beb35bec448249e2991750 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.728 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] image 5f1c614c-1ba8-4e34-915f-7078c46805eb at (/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750): checking
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.729 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] image 5f1c614c-1ba8-4e34-915f-7078c46805eb at (/var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.731 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.732 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] 2c7f9a83-17b3-4e0f-8936-9e6a19920064 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.732 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] 2c7f9a83-17b3-4e0f-8936-9e6a19920064 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.733 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.789 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.790 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 2c7f9a83-17b3-4e0f-8936-9e6a19920064 is backed by 182dd0a237ed06a0f4beb35bec448249e2991750 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.790 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] a0208ed9-3ae5-4499-81e2-3f2cb621f74d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.790 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] a0208ed9-3ae5-4499-81e2-3f2cb621f74d has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.790 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.819 185198 DEBUG nova.compute.manager [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Received event network-changed-56ca1581-424b-4d3f-8fdb-91c955b6ada3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.820 185198 DEBUG nova.compute.manager [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Refreshing instance network info cache due to event network-changed-56ca1581-424b-4d3f-8fdb-91c955b6ada3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.820 185198 DEBUG oslo_concurrency.lockutils [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.840 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.841 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d is backed by 182dd0a237ed06a0f4beb35bec448249e2991750 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.841 185198 WARNING nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.841 185198 WARNING nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.842 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Active base files: /var/lib/nova/instances/_base/182dd0a237ed06a0f4beb35bec448249e2991750
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.842 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.842 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9ecb3a2b03b938a57c4ca8ac773f7bf22d2bae0d
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.843 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/77bca481b205ef365f4321b105547570655fdb07
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.843 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.843 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.843 185198 DEBUG nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 31 10:29:28 compute-0 nova_compute[185194]: 2026-01-31 10:29:28.843 185198 INFO nova.virt.libvirt.imagecache [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 31 10:29:29 compute-0 nova_compute[185194]: 2026-01-31 10:29:29.008 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 10:29:29 compute-0 podman[201068]: time="2026-01-31T10:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:29:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28702 "" "Go-http-client/1.1"
Jan 31 10:29:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4384 "" "Go-http-client/1.1"
Jan 31 10:29:30 compute-0 nova_compute[185194]: 2026-01-31 10:29:30.793 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:31 compute-0 openstack_network_exporter[204162]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:29:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:29:31 compute-0 openstack_network_exporter[204162]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:29:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:29:32 compute-0 nova_compute[185194]: 2026-01-31 10:29:32.417 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:32 compute-0 nova_compute[185194]: 2026-01-31 10:29:32.597 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:33 compute-0 nova_compute[185194]: 2026-01-31 10:29:33.945 185198 DEBUG nova.network.neutron [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Updating instance_info_cache with network_info: [{"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:34 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:34.693 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:29:34 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:34.695 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.695 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:34 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:34.696 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.729 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Releasing lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.729 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Instance network_info: |[{"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.729 185198 DEBUG oslo_concurrency.lockutils [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.730 185198 DEBUG nova.network.neutron [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Refreshing network info cache for port 56ca1581-424b-4d3f-8fdb-91c955b6ada3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.732 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Start _get_guest_xml network_info=[{"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'device_type': 'disk', 'guest_format': None, 'image_id': '5f1c614c-1ba8-4e34-915f-7078c46805eb'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.743 185198 WARNING nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.889 185198 DEBUG nova.network.neutron [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.928 185198 DEBUG nova.virt.libvirt.host [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.929 185198 DEBUG nova.virt.libvirt.host [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.936 185198 DEBUG nova.virt.libvirt.host [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.937 185198 DEBUG nova.virt.libvirt.host [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.938 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.938 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T10:27:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f3dcbeb5-bd7a-436b-a0c1-9d20fb387210',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T10:27:35Z,direct_url=<?>,disk_format='qcow2',id=5f1c614c-1ba8-4e34-915f-7078c46805eb,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='155389cbed6644acacdbeeb6155adb54',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T10:27:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.939 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.940 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.941 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.941 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.942 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.943 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.943 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.944 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.944 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.945 185198 DEBUG nova.virt.hardware [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.951 185198 DEBUG nova.virt.libvirt.vif [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1366047794',display_name='tempest-TestNetworkBasicOps-server-1366047794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1366047794',id=9,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAC6zFij6hdbVTVJDFIvBpr0G9j0UvlgJJbPRKYmbIN6JpbhCnDSsH6dBLsw6NWpaGzvhVhQvjhGEP8mVxQ5rZnizKvA9MS6usFFQbqmsX2VpYujeNOp5BtC1EBXj3OYWg==',key_name='tempest-TestNetworkBasicOps-372007442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbe14d6b97b64029ae17f2f239669a6f',ramdisk_id='',reservation_id='r-10cqn9tl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1682954337',owner_user_name='tempest-TestNetworkBasicOps-1682954337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:29:19Z,user_data=None,user_id='0e738cdb50d649b9968d2e1cdea3b9cb',uuid=a0208ed9-3ae5-4499-81e2-3f2cb621f74d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.952 185198 DEBUG nova.network.os_vif_util [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Converting VIF {"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.954 185198 DEBUG nova.network.os_vif_util [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:d0:f6,bridge_name='br-int',has_traffic_filtering=True,id=56ca1581-424b-4d3f-8fdb-91c955b6ada3,network=Network(daff63ed-ea84-45dc-9745-1af596db4581),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56ca1581-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.955 185198 DEBUG nova.objects.instance [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lazy-loading 'pci_devices' on Instance uuid a0208ed9-3ae5-4499-81e2-3f2cb621f74d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.958 185198 DEBUG oslo_concurrency.lockutils [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.958 185198 DEBUG nova.compute.manager [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.959 185198 DEBUG nova.compute.manager [None req-81a521d2-0890-4c4b-8709-4e23c280fe77 2205cfa3b87343af99106de256070375 904e48d9dedd4c41a51e9b18681b22c2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] network_info to inject: |[{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.962 185198 DEBUG oslo_concurrency.lockutils [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:29:34 compute-0 nova_compute[185194]: 2026-01-31 10:29:34.962 185198 DEBUG nova.network.neutron [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Refreshing network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.015 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <uuid>a0208ed9-3ae5-4499-81e2-3f2cb621f74d</uuid>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <name>instance-00000009</name>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <memory>131072</memory>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <vcpu>1</vcpu>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <metadata>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:name>tempest-TestNetworkBasicOps-server-1366047794</nova:name>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:creationTime>2026-01-31 10:29:34</nova:creationTime>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:flavor name="m1.nano">
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:memory>128</nova:memory>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:disk>1</nova:disk>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:swap>0</nova:swap>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:vcpus>1</nova:vcpus>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       </nova:flavor>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:owner>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:user uuid="0e738cdb50d649b9968d2e1cdea3b9cb">tempest-TestNetworkBasicOps-1682954337-project-member</nova:user>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:project uuid="cbe14d6b97b64029ae17f2f239669a6f">tempest-TestNetworkBasicOps-1682954337</nova:project>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       </nova:owner>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:root type="image" uuid="5f1c614c-1ba8-4e34-915f-7078c46805eb"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <nova:ports>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         <nova:port uuid="56ca1581-424b-4d3f-8fdb-91c955b6ada3">
Jan 31 10:29:35 compute-0 nova_compute[185194]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:         </nova:port>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       </nova:ports>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </nova:instance>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </metadata>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <sysinfo type="smbios">
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <system>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="manufacturer">RDO</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="product">OpenStack Compute</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="serial">a0208ed9-3ae5-4499-81e2-3f2cb621f74d</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="uuid">a0208ed9-3ae5-4499-81e2-3f2cb621f74d</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <entry name="family">Virtual Machine</entry>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </system>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </sysinfo>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <os>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <boot dev="hd"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <smbios mode="sysinfo"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </os>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <features>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <acpi/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <apic/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <vmcoreinfo/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </features>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <clock offset="utc">
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <timer name="hpet" present="no"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </clock>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <cpu mode="host-model" match="exact">
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </cpu>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   <devices>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <disk type="file" device="disk">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <target dev="vda" bus="virtio"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <disk type="file" device="cdrom">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <driver name="qemu" type="raw" cache="none"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <source file="/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.config"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <target dev="sda" bus="sata"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </disk>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <interface type="ethernet">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <mac address="fa:16:3e:14:d0:f6"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <mtu size="1442"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <target dev="tap56ca1581-42"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </interface>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <serial type="pty">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <log file="/var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/console.log" append="off"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </serial>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <video>
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <model type="virtio"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </video>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <input type="tablet" bus="usb"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <rng model="virtio">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <backend model="random">/dev/urandom</backend>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </rng>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <controller type="usb" index="0"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     <memballoon model="virtio">
Jan 31 10:29:35 compute-0 nova_compute[185194]:       <stats period="10"/>
Jan 31 10:29:35 compute-0 nova_compute[185194]:     </memballoon>
Jan 31 10:29:35 compute-0 nova_compute[185194]:   </devices>
Jan 31 10:29:35 compute-0 nova_compute[185194]: </domain>
Jan 31 10:29:35 compute-0 nova_compute[185194]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.017 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Preparing to wait for external event network-vif-plugged-56ca1581-424b-4d3f-8fdb-91c955b6ada3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.018 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Acquiring lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.018 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.019 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.020 185198 DEBUG nova.virt.libvirt.vif [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T10:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1366047794',display_name='tempest-TestNetworkBasicOps-server-1366047794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1366047794',id=9,image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAC6zFij6hdbVTVJDFIvBpr0G9j0UvlgJJbPRKYmbIN6JpbhCnDSsH6dBLsw6NWpaGzvhVhQvjhGEP8mVxQ5rZnizKvA9MS6usFFQbqmsX2VpYujeNOp5BtC1EBXj3OYWg==',key_name='tempest-TestNetworkBasicOps-372007442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbe14d6b97b64029ae17f2f239669a6f',ramdisk_id='',reservation_id='r-10cqn9tl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5f1c614c-1ba8-4e34-915f-7078c46805eb',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1682954337',owner_user_name='tempest-TestNetworkBasicOps-1682954337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T10:29:19Z,user_data=None,user_id='0e738cdb50d649b9968d2e1cdea3b9cb',uuid=a0208ed9-3ae5-4499-81e2-3f2cb621f74d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.020 185198 DEBUG nova.network.os_vif_util [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Converting VIF {"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.021 185198 DEBUG nova.network.os_vif_util [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:d0:f6,bridge_name='br-int',has_traffic_filtering=True,id=56ca1581-424b-4d3f-8fdb-91c955b6ada3,network=Network(daff63ed-ea84-45dc-9745-1af596db4581),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56ca1581-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.022 185198 DEBUG os_vif [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:d0:f6,bridge_name='br-int',has_traffic_filtering=True,id=56ca1581-424b-4d3f-8fdb-91c955b6ada3,network=Network(daff63ed-ea84-45dc-9745-1af596db4581),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56ca1581-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.022 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.023 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.023 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.027 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.027 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ca1581-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.028 185198 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56ca1581-42, col_values=(('external_ids', {'iface-id': '56ca1581-424b-4d3f-8fdb-91c955b6ada3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:d0:f6', 'vm-uuid': 'a0208ed9-3ae5-4499-81e2-3f2cb621f74d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.029 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:35 compute-0 NetworkManager[56281]: <info>  [1769855375.0313] manager: (tap56ca1581-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.031 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.041 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.043 185198 INFO os_vif [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:d0:f6,bridge_name='br-int',has_traffic_filtering=True,id=56ca1581-424b-4d3f-8fdb-91c955b6ada3,network=Network(daff63ed-ea84-45dc-9745-1af596db4581),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56ca1581-42')
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.350 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.351 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.351 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] No VIF found with MAC fa:16:3e:14:d0:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 10:29:35 compute-0 nova_compute[185194]: 2026-01-31 10:29:35.352 185198 INFO nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Using config drive
Jan 31 10:29:35 compute-0 podman[251818]: 2026-01-31 10:29:35.981047372 +0000 UTC m=+0.092839199 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 10:29:36 compute-0 podman[251817]: 2026-01-31 10:29:36.002202814 +0000 UTC m=+0.124056149 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 10:29:37 compute-0 nova_compute[185194]: 2026-01-31 10:29:37.599 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:38 compute-0 nova_compute[185194]: 2026-01-31 10:29:38.816 185198 INFO nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Creating config drive at /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.config
Jan 31 10:29:38 compute-0 nova_compute[185194]: 2026-01-31 10:29:38.822 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb9jgu8_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:29:38 compute-0 nova_compute[185194]: 2026-01-31 10:29:38.954 185198 DEBUG oslo_concurrency.processutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb9jgu8_s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:29:39 compute-0 kernel: tap56ca1581-42: entered promiscuous mode
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.0308] manager: (tap56ca1581-42): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.033 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 ovn_controller[97627]: 2026-01-31T10:29:39Z|00092|binding|INFO|Claiming lport 56ca1581-424b-4d3f-8fdb-91c955b6ada3 for this chassis.
Jan 31 10:29:39 compute-0 ovn_controller[97627]: 2026-01-31T10:29:39Z|00093|binding|INFO|56ca1581-424b-4d3f-8fdb-91c955b6ada3: Claiming fa:16:3e:14:d0:f6 10.100.0.13
Jan 31 10:29:39 compute-0 ovn_controller[97627]: 2026-01-31T10:29:39Z|00094|binding|INFO|Setting lport 56ca1581-424b-4d3f-8fdb-91c955b6ada3 ovn-installed in OVS
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.044 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.047 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 systemd-udevd[251890]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.1000] device (tap56ca1581-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.1074] device (tap56ca1581-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 10:29:39 compute-0 podman[251874]: 2026-01-31 10:29:39.129190549 +0000 UTC m=+0.101510903 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:29:39 compute-0 ovn_controller[97627]: 2026-01-31T10:29:39Z|00095|binding|INFO|Setting lport 56ca1581-424b-4d3f-8fdb-91c955b6ada3 up in Southbound
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.144 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:d0:f6 10.100.0.13'], port_security=['fa:16:3e:14:d0:f6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a0208ed9-3ae5-4499-81e2-3f2cb621f74d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daff63ed-ea84-45dc-9745-1af596db4581', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbe14d6b97b64029ae17f2f239669a6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73d1668-a42d-4352-80a5-daeec6e1506d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab84a893-ac5b-45a2-b186-21c420272d94, chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f82ea581640>], logical_port=56ca1581-424b-4d3f-8fdb-91c955b6ada3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.146 106883 INFO neutron.agent.ovn.metadata.agent [-] Port 56ca1581-424b-4d3f-8fdb-91c955b6ada3 in datapath daff63ed-ea84-45dc-9745-1af596db4581 bound to our chassis
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.147 106883 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daff63ed-ea84-45dc-9745-1af596db4581
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.162 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[a225453b-6050-499a-9e3e-16a9b8fbf984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.162 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdaff63ed-e1 in ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.166 238337 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdaff63ed-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.166 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[a8382bba-999b-4eea-a916-d0b626b663a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.168 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f5734b78-05f1-4dd2-946a-89d63ddd638c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.180 107396 DEBUG oslo.privsep.daemon [-] privsep: reply[96efd1a6-e2ba-4702-8b84-21c3a29a4c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.208 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f824ce1c-b6ee-4218-aa2e-210299800755]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.231 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[5443de25-7f68-40c4-a6ea-f987676fff41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 systemd-udevd[251895]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.238 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[e74246b5-cebd-4a02-8a09-9fc8bf73af36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.2395] manager: (tapdaff63ed-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.254 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[82cd45ee-ecf2-47ea-b8f9-87c03b54d31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.259 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d4c9b2-6543-40f5-b339-67616fde2c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.2799] device (tapdaff63ed-e0): carrier: link connected
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.286 238370 DEBUG oslo.privsep.daemon [-] privsep: reply[46adc104-fd9a-41e5-aad1-6d5b703dcdcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.303 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecdc530-0921-4451-ab3b-9aee8bf50b9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaff63ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:ef:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560077, 'reachable_time': 39544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251930, 'error': None, 'target': 'ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.319 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[662adb54-4ad1-433c-8d56-6933047ba481]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:ef0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560077, 'tstamp': 560077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251931, 'error': None, 'target': 'ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.338 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3437c276-905c-4776-bc3b-80109ecc00d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaff63ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:ef:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560077, 'reachable_time': 39544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251932, 'error': None, 'target': 'ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.369 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[97fe71d7-ac85-41fd-b4cf-6addd63ee882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.428 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ac5e9c-76e7-4438-994d-feb2bc68e1a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.431 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaff63ed-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:39 compute-0 systemd-machined[156556]: New machine qemu-9-instance-00000009.
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.432 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.435 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaff63ed-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:39 compute-0 kernel: tapdaff63ed-e0: entered promiscuous mode
Jan 31 10:29:39 compute-0 NetworkManager[56281]: <info>  [1769855379.4387] manager: (tapdaff63ed-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.439 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.442 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaff63ed-e0, col_values=(('external_ids', {'iface-id': '404b365a-e2a6-44ef-901a-47776982a8d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.444 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 ovn_controller[97627]: 2026-01-31T10:29:39Z|00096|binding|INFO|Releasing lport 404b365a-e2a6-44ef-901a-47776982a8d0 from this chassis (sb_readonly=0)
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.445 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.446 106883 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/daff63ed-ea84-45dc-9745-1af596db4581.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/daff63ed-ea84-45dc-9745-1af596db4581.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.447 238337 DEBUG oslo.privsep.daemon [-] privsep: reply[3041bbfa-7917-4a5f-aec9-74d1bf7d27d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.449 106883 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: global
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     log         /dev/log local0 debug
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     log-tag     haproxy-metadata-proxy-daff63ed-ea84-45dc-9745-1af596db4581
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     user        root
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     group       root
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     maxconn     1024
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     pidfile     /var/lib/neutron/external/pids/daff63ed-ea84-45dc-9745-1af596db4581.pid.haproxy
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     daemon
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: defaults
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     log global
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     mode http
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     option httplog
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     option dontlognull
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     option http-server-close
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     option forwardfor
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     retries                 3
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     timeout http-request    30s
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     timeout connect         30s
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     timeout client          32s
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     timeout server          32s
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     timeout http-keep-alive 30s
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: listen listener
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     bind 169.254.169.254:80
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:     http-request add-header X-OVN-Network-ID daff63ed-ea84-45dc-9745-1af596db4581
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 10:29:39 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:29:39.449 106883 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581', 'env', 'PROCESS_TAG=haproxy-daff63ed-ea84-45dc-9745-1af596db4581', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/daff63ed-ea84-45dc-9745-1af596db4581.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 10:29:39 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.459 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.484 185198 DEBUG nova.network.neutron [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Updated VIF entry in instance network info cache for port 56ca1581-424b-4d3f-8fdb-91c955b6ada3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.485 185198 DEBUG nova.network.neutron [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Updating instance_info_cache with network_info: [{"id": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "address": "fa:16:3e:14:d0:f6", "network": {"id": "daff63ed-ea84-45dc-9745-1af596db4581", "bridge": "br-int", "label": "tempest-network-smoke--2037938321", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cbe14d6b97b64029ae17f2f239669a6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56ca1581-42", "ovs_interfaceid": "56ca1581-424b-4d3f-8fdb-91c955b6ada3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:39 compute-0 podman[251973]: 2026-01-31 10:29:39.836658968 +0000 UTC m=+0.068037158 container create 7df2fa0bcbe6ab4d7304e3954f4862f01eece43503ee2e6e21ce713add68a584 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:29:39 compute-0 systemd[1]: Started libpod-conmon-7df2fa0bcbe6ab4d7304e3954f4862f01eece43503ee2e6e21ce713add68a584.scope.
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.890 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855379.8900104, a0208ed9-3ae5-4499-81e2-3f2cb621f74d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:29:39 compute-0 nova_compute[185194]: 2026-01-31 10:29:39.891 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] VM Started (Lifecycle Event)
Jan 31 10:29:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 10:29:39 compute-0 podman[251973]: 2026-01-31 10:29:39.805572562 +0000 UTC m=+0.036950732 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 10:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3903f6afb8d016eab58f54e93fa3f742c7fce4bdf605aca63b5b19e4416d730f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 10:29:39 compute-0 podman[251973]: 2026-01-31 10:29:39.923663103 +0000 UTC m=+0.155041303 container init 7df2fa0bcbe6ab4d7304e3954f4862f01eece43503ee2e6e21ce713add68a584 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 10:29:39 compute-0 podman[251973]: 2026-01-31 10:29:39.933328991 +0000 UTC m=+0.164707141 container start 7df2fa0bcbe6ab4d7304e3954f4862f01eece43503ee2e6e21ce713add68a584 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 10:29:39 compute-0 neutron-haproxy-ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581[251995]: [NOTICE]   (251999) : New worker (252001) forked
Jan 31 10:29:39 compute-0 neutron-haproxy-ovnmeta-daff63ed-ea84-45dc-9745-1af596db4581[251995]: [NOTICE]   (251999) : Loading success.
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.031 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.101 185198 DEBUG oslo_concurrency.lockutils [req-e49140b0-2d2b-4ef5-bf0e-f83be695baf4 req-32ea2136-48f6-4cc1-96fa-b0ed15d3f469 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-a0208ed9-3ae5-4499-81e2-3f2cb621f74d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:29:40 compute-0 ovn_controller[97627]: 2026-01-31T10:29:40Z|00097|binding|INFO|Releasing lport 27135e8d-d657-4300-a1f6-02bc2c0e93f0 from this chassis (sb_readonly=0)
Jan 31 10:29:40 compute-0 ovn_controller[97627]: 2026-01-31T10:29:40Z|00098|binding|INFO|Releasing lport 404b365a-e2a6-44ef-901a-47776982a8d0 from this chassis (sb_readonly=0)
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.228 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.337 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.345 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855379.8903172, a0208ed9-3ae5-4499-81e2-3f2cb621f74d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.346 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] VM Paused (Lifecycle Event)
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.464 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.472 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:29:40 compute-0 nova_compute[185194]: 2026-01-31 10:29:40.579 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:29:42 compute-0 nova_compute[185194]: 2026-01-31 10:29:42.603 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:42 compute-0 podman[252011]: 2026-01-31 10:29:42.96787912 +0000 UTC m=+0.088123943 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi)
Jan 31 10:29:42 compute-0 podman[252010]: 2026-01-31 10:29:42.994784683 +0000 UTC m=+0.113269873 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, config_id=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git)
Jan 31 10:29:45 compute-0 nova_compute[185194]: 2026-01-31 10:29:45.035 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:47 compute-0 nova_compute[185194]: 2026-01-31 10:29:47.607 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:48 compute-0 nova_compute[185194]: 2026-01-31 10:29:48.656 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:49 compute-0 nova_compute[185194]: 2026-01-31 10:29:49.409 185198 DEBUG nova.network.neutron [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updated VIF entry in instance network info cache for port b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 10:29:49 compute-0 nova_compute[185194]: 2026-01-31 10:29:49.410 185198 DEBUG nova.network.neutron [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:29:49 compute-0 nova_compute[185194]: 2026-01-31 10:29:49.713 185198 DEBUG oslo_concurrency.lockutils [req-03798ee3-6c92-4525-9cb5-fa68dadbb223 req-bfa4c9c7-20f7-46d9-81e2-aa0f1f11cd01 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:29:50 compute-0 nova_compute[185194]: 2026-01-31 10:29:50.039 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:50 compute-0 nova_compute[185194]: 2026-01-31 10:29:50.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:29:50 compute-0 nova_compute[185194]: 2026-01-31 10:29:50.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 10:29:52 compute-0 nova_compute[185194]: 2026-01-31 10:29:52.610 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:52 compute-0 podman[252048]: 2026-01-31 10:29:52.978046791 +0000 UTC m=+0.098515159 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:29:54 compute-0 nova_compute[185194]: 2026-01-31 10:29:54.286 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:55 compute-0 nova_compute[185194]: 2026-01-31 10:29:55.042 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:57 compute-0 nova_compute[185194]: 2026-01-31 10:29:57.613 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:29:58 compute-0 podman[252072]: 2026-01-31 10:29:58.013960919 +0000 UTC m=+0.125140116 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, com.redhat.component=ubi9-minimal-container, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.expose-services=, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-01-22T05:09:47Z, managed_by=edpm_ansible, release=1769056855, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7)
Jan 31 10:29:58 compute-0 podman[252091]: 2026-01-31 10:29:58.137760651 +0000 UTC m=+0.119579519 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 10:29:59 compute-0 podman[201068]: time="2026-01-31T10:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:29:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29935 "" "Go-http-client/1.1"
Jan 31 10:29:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4857 "" "Go-http-client/1.1"
Jan 31 10:30:00 compute-0 nova_compute[185194]: 2026-01-31 10:30:00.046 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:01 compute-0 openstack_network_exporter[204162]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:30:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:30:01 compute-0 openstack_network_exporter[204162]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:30:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 31 10:30:02 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:02 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1089, in _commit_impl\n    self.engine.dialect.do_commit(self.connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit\n    dbapi_connection.commit()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 422, in commit\n    self._read_ok_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 396, in _read_ok_packet\n    pkt = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__\n    next(self.gen)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1094, in _transaction_scope\n    yield resource\n', '  File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__\n    next(self.gen)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 696, in _session\n    self.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 693, in _session\n    self._end_session_transaction(self.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 721, in _end_session_transaction\n    session.commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1451, in commit\n    self._transaction.commit(_to_root=self.future)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 836, in commit\n    trans.commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2459, in commit\n    self._do_commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2649, in _do_commit\n    self._connection_commit_impl()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2620, in _connection_commit_impl\n    self.connection._commit_impl()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1091, in _commit_impl\n    self._handle_dbapi_exception(e, None, None, None, None)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1089, in _commit_impl\n    self.engine.dialect.do_commit(self.connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit\n    dbapi_connection.commit()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 422, in commit\n    self._read_ok_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 396, in _read_ok_packet\n    pkt = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db     raise result
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1089, in _commit_impl\n    self.engine.dialect.do_commit(self.connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit\n    dbapi_connection.commit()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 422, in commit\n    self._read_ok_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 396, in _read_ok_packet\n    pkt = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__\n    next(self.gen)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1094, in _transaction_scope\n    yield resource\n', '  File "/usr/lib64/python3.9/contextlib.py", line 126, in __exit__\n    next(self.gen)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 696, in _session\n    self.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 693, in _session\n    self._end_session_transaction(self.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 721, in _end_session_transaction\n    session.commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1451, in commit\n    self._transaction.commit(_to_root=self.future)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 836, in commit\n    trans.commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2459, in commit\n    self._do_commit()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2649, in _do_commit\n    self._connection_commit_impl()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2620, in _connection_commit_impl\n    self.connection._commit_impl()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1091, in _commit_impl\n  
  self._handle_dbapi_exception(e, None, None, None, None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1089, in _commit_impl\n    self.engine.dialect.do_commit(self.connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit\n    dbapi_connection.commit()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 422, in commit\n    self._read_ok_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 396, in _read_ok_packet\n    pkt = self._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n"].
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.239 185198 ERROR nova.servicegroup.drivers.db 
Jan 31 10:30:02 compute-0 nova_compute[185194]: 2026-01-31 10:30:02.615 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:04 compute-0 nova_compute[185194]: 2026-01-31 10:30:04.305 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.051 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:05 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:05 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, 
with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     task(self, context)
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9863, in _heal_instance_info_cache
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     db_instances = objects.InstanceList.get_by_host(
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task     raise result
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in 
_handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:05 compute-0 nova_compute[185194]: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task 
Jan 31 10:30:06 compute-0 rsyslogd[235457]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:06 compute-0 rsyslogd[235457]: message too long (8622) with configured size 8096, begin of message is: 2026-01-31 10:30:05.815 185198 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:07 compute-0 podman[252111]: 2026-01-31 10:30:07.009578628 +0000 UTC m=+0.125974086 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260126, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 31 10:30:07 compute-0 podman[252110]: 2026-01-31 10:30:07.025844499 +0000 UTC m=+0.139807917 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 10:30:07 compute-0 nova_compute[185194]: 2026-01-31 10:30:07.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:07 compute-0 nova_compute[185194]: 2026-01-31 10:30:07.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:07 compute-0 nova_compute[185194]: 2026-01-31 10:30:07.618 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:08 compute-0 nova_compute[185194]: 2026-01-31 10:30:08.602 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Error during ComputeManager._sync_scheduler_instance_info: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:09 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:09 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    
return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, 
with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     task(self, context)
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 2236, in _sync_scheduler_instance_info
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_host(context, self.host,
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task     raise result
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in 
_handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task 
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.071 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:09 compute-0 rsyslogd[235457]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:09 compute-0 rsyslogd[235457]: message too long (8622) with configured size 8096, begin of message is: 2026-01-31 10:30:09.069 185198 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:09 compute-0 nova_compute[185194]: 2026-01-31 10:30:09.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:09 compute-0 podman[252155]: 2026-01-31 10:30:09.973131696 +0000 UTC m=+0.092517351 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.055 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:10 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:10 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 
207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return 
_ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 
619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     task(self, context)
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11282, in _cleanup_expired_console_auth_tokens
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task     raise result
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n 
   self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task 
Jan 31 10:30:10 compute-0 rsyslogd[235457]: message too long (8183) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:10 compute-0 rsyslogd[235457]: message too long (8247) with configured size 8096, begin of message is: 2026-01-31 10:30:10.388 185198 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:10 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:10 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db     raise result
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:10 compute-0 nova_compute[185194]: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db 
Jan 31 10:30:11 compute-0 rsyslogd[235457]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:11 compute-0 rsyslogd[235457]: message too long (9052) with configured size 8096, begin of message is: 2026-01-31 10:30:10.914 185198 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:11 compute-0 nova_compute[185194]: 2026-01-31 10:30:11.389 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:11 compute-0 nova_compute[185194]: 2026-01-31 10:30:11.390 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:30:11 compute-0 nova_compute[185194]: 2026-01-31 10:30:11.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:12 compute-0 nova_compute[185194]: 2026-01-31 10:30:12.622 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:13 compute-0 podman[252180]: 2026-01-31 10:30:13.997553444 +0000 UTC m=+0.103799849 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 10:30:14 compute-0 podman[252179]: 2026-01-31 10:30:14.008467853 +0000 UTC m=+0.117071207 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=kepler, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container)
Jan 31 10:30:15 compute-0 nova_compute[185194]: 2026-01-31 10:30:15.058 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:15 compute-0 nova_compute[185194]: 2026-01-31 10:30:15.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:16.461 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:30:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:16.461 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:30:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:16.463 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:16 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:16 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 
179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n  
  fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to 
MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     task(self, context)
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task     raise result
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 
'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:16 compute-0 nova_compute[185194]: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task 
Jan 31 10:30:16 compute-0 rsyslogd[235457]: message too long (8132) with configured size 8096, begin of message is: 2026-01-31 10:30:16.558 185198 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:17 compute-0 nova_compute[185194]: 2026-01-31 10:30:17.626 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:20 compute-0 nova_compute[185194]: 2026-01-31 10:30:20.063 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:21 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:21 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db     raise result
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:21 compute-0 nova_compute[185194]: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db 
Jan 31 10:30:21 compute-0 rsyslogd[235457]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:21 compute-0 rsyslogd[235457]: message too long (9052) with configured size 8096, begin of message is: 2026-01-31 10:30:21.594 185198 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:22 compute-0 nova_compute[185194]: 2026-01-31 10:30:22.628 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:23 compute-0 podman[252233]: 2026-01-31 10:30:23.96827938 +0000 UTC m=+0.086103443 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 31 10:30:24 compute-0 ovn_controller[97627]: 2026-01-31T10:30:24Z|00099|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 10:30:25 compute-0 nova_compute[185194]: 2026-01-31 10:30:25.067 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:27 compute-0 nova_compute[185194]: 2026-01-31 10:30:27.631 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:28 compute-0 podman[252258]: 2026-01-31 10:30:28.985016335 +0000 UTC m=+0.089865016 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, org.opencontainers.image.created=2026-01-22T05:09:47Z, release=1769056855, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, build-date=2026-01-22T05:09:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal)
Jan 31 10:30:28 compute-0 podman[252257]: 2026-01-31 10:30:28.996036807 +0000 UTC m=+0.107552582 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:30:29 compute-0 podman[201068]: time="2026-01-31T10:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:30:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29935 "" "Go-http-client/1.1"
Jan 31 10:30:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4848 "" "Go-http-client/1.1"
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.070 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.606 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.706 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.706 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.707 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fd948d247d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.707 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25040>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27a40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27aa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.708 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27b00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d25310>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd94bc69b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.709 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27d70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d246b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.710 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d27f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d26f90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d247a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.711 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fd948d24fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd943f8d760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.712 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:30 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:30.713 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Error during ComputeManager._run_pending_deletes: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:30 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:30 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return 
f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     task(self, context)
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11152, in _run_pending_deletes
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_filters(
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task     raise result
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task 
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:30 compute-0 nova_compute[185194]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:30 compute-0 nova_compute[185194]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db     raise result
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 31 10:30:30 compute-0 nova_compute[185194]: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db 
Jan 31 10:30:31 compute-0 rsyslogd[235457]: message too long (9083) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:31 compute-0 rsyslogd[235457]: message too long (9147) with configured size 8096, begin of message is: 2026-01-31 10:30:30.976 185198 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:31 compute-0 rsyslogd[235457]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:31 compute-0 rsyslogd[235457]: message too long (9052) with configured size 8096, begin of message is: 2026-01-31 10:30:30.978 185198 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.093 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:30 GMT Server: Apache x-compute-request-id: req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c x-openstack-request-id: req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.093 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.093 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-6d1a20ea-01e8-4289-9ea2-05a3ed9af56c)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.094 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fd948d25010>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.102 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.103 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:31 GMT Server: Apache x-compute-request-id: req-d6da36df-496b-4283-a8a2-0387274de389 x-openstack-request-id: req-d6da36df-496b-4283-a8a2-0387274de389 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-d6da36df-496b-4283-a8a2-0387274de389 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d6da36df-496b-4283-a8a2-0387274de389): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d6da36df-496b-4283-a8a2-0387274de389)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d6da36df-496b-4283-a8a2-0387274de389)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.407 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.409 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fd948d24830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.411 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.412 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:31 compute-0 openstack_network_exporter[204162]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:30:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:30:31 compute-0 openstack_network_exporter[204162]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:30:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.573 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:31 GMT Server: Apache x-compute-request-id: req-ff0711a9-ddf7-4b33-9485-96684150a57c x-openstack-request-id: req-ff0711a9-ddf7-4b33-9485-96684150a57c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-ff0711a9-ddf7-4b33-9485-96684150a57c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-ff0711a9-ddf7-4b33-9485-96684150a57c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-ff0711a9-ddf7-4b33-9485-96684150a57c)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-ff0711a9-ddf7-4b33-9485-96684150a57c)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.574 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.576 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fd948d265d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.581 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.583 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.716 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:31 GMT Server: Apache x-compute-request-id: req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed x-openstack-request-id: req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.716 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-3fc1096f-4815-4dec-976a-dd03d3ecf3ed)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.717 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.721 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fd948d27950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.727 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.728 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.852 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:31 GMT Server: Apache x-compute-request-id: req-72b099df-49b3-4cab-b87a-7f95ac79be1a x-openstack-request-id: req-72b099df-49b3-4cab-b87a-7f95ac79be1a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.852 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.853 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-72b099df-49b3-4cab-b87a-7f95ac79be1a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-72b099df-49b3-4cab-b87a-7f95ac79be1a): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-72b099df-49b3-4cab-b87a-7f95ac79be1a)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-72b099df-49b3-4cab-b87a-7f95ac79be1a)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.854 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fd948d24a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.862 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.864 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.955 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:31 GMT Server: Apache x-compute-request-id: req-953f3f92-e3bb-4610-803f-d4903c95e4a2 x-openstack-request-id: req-953f3f92-e3bb-4610-803f-d4903c95e4a2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.956 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.957 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-953f3f92-e3bb-4610-803f-d4903c95e4a2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-953f3f92-e3bb-4610-803f-d4903c95e4a2): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-953f3f92-e3bb-4610-803f-d4903c95e4a2)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-953f3f92-e3bb-4610-803f-d4903c95e4a2)
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.958 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fd948d27a70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.967 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:31 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:31.969 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.218 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:32 GMT Server: Apache x-compute-request-id: req-d5500057-42a1-4fee-967c-479c90c4372c x-openstack-request-id: req-d5500057-42a1-4fee-967c-479c90c4372c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-d5500057-42a1-4fee-967c-479c90c4372c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d5500057-42a1-4fee-967c-479c90c4372c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d5500057-42a1-4fee-967c-479c90c4372c)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d5500057-42a1-4fee-967c-479c90c4372c)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.219 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.222 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fd948d24aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.228 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.229 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.378 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:32 GMT Server: Apache x-compute-request-id: req-d4f9b880-6be4-4002-a788-2863f14afe56 x-openstack-request-id: req-d4f9b880-6be4-4002-a788-2863f14afe56 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-d4f9b880-6be4-4002-a788-2863f14afe56 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d4f9b880-6be4-4002-a788-2863f14afe56): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d4f9b880-6be4-4002-a788-2863f14afe56)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d4f9b880-6be4-4002-a788-2863f14afe56)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.379 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.382 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fd948d27ad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.388 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.389 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:32 compute-0 nova_compute[185194]: 2026-01-31 10:30:32.633 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.680 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:32 GMT Server: Apache x-compute-request-id: req-d4a91d43-900b-4035-9d67-9dd03f37dbcf x-openstack-request-id: req-d4a91d43-900b-4035-9d67-9dd03f37dbcf _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.680 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.680 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-d4a91d43-900b-4035-9d67-9dd03f37dbcf request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d4a91d43-900b-4035-9d67-9dd03f37dbcf): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-d4a91d43-900b-4035-9d67-9dd03f37dbcf)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-d4a91d43-900b-4035-9d67-9dd03f37dbcf)
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.681 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fd948d252e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.686 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:32 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:32.687 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:32 GMT Server: Apache x-compute-request-id: req-1d08a2b5-3304-4868-af5b-3e05ff8b2227 x-openstack-request-id: req-1d08a2b5-3304-4868-af5b-3e05ff8b2227 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-1d08a2b5-3304-4868-af5b-3e05ff8b2227 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-1d08a2b5-3304-4868-af5b-3e05ff8b2227): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-1d08a2b5-3304-4868-af5b-3e05ff8b2227)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-1d08a2b5-3304-4868-af5b-3e05ff8b2227)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.015 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fd948d246e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.027 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.029 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.111 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:33 GMT Server: Apache x-compute-request-id: req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31 x-openstack-request-id: req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-2acd0ab7-0aa2-4fd9-9900-a393f77e8b31)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.112 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fd948d27fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.119 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.121 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.569 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:33 GMT Server: Apache x-compute-request-id: req-bee209cf-4730-4794-a210-b8dab647abe8 x-openstack-request-id: req-bee209cf-4730-4794-a210-b8dab647abe8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.569 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.569 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-bee209cf-4730-4794-a210-b8dab647abe8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-bee209cf-4730-4794-a210-b8dab647abe8): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-bee209cf-4730-4794-a210-b8dab647abe8)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-bee209cf-4730-4794-a210-b8dab647abe8)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.570 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.573 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fd948d27c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.577 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.578 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.893 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:33 GMT Server: Apache x-compute-request-id: req-c6546853-5de1-4470-966c-c397aadc6264 x-openstack-request-id: req-c6546853-5de1-4470-966c-c397aadc6264 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.893 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-c6546853-5de1-4470-966c-c397aadc6264 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-c6546853-5de1-4470-966c-c397aadc6264): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-c6546853-5de1-4470-966c-c397aadc6264)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-c6546853-5de1-4470-966c-c397aadc6264)
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.894 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fd948d27f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.901 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:33 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:33.903 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.026 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:33 GMT Server: Apache x-compute-request-id: req-50d06315-7608-4396-b570-15a3e7a712bc x-openstack-request-id: req-50d06315-7608-4396-b570-15a3e7a712bc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.027 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.027 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-50d06315-7608-4396-b570-15a3e7a712bc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-50d06315-7608-4396-b570-15a3e7a712bc): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-50d06315-7608-4396-b570-15a3e7a712bc)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-50d06315-7608-4396-b570-15a3e7a712bc)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.028 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.031 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fd948d27d40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.036 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.038 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.302 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f x-openstack-request-id: req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.302 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.302 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-bf7cac95-3344-4e6c-aa2a-7515bf1ff31f)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.303 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fd948d24b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.311 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.312 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-82dcb226-bb2a-4a52-999d-da6423ca8b0b x-openstack-request-id: req-82dcb226-bb2a-4a52-999d-da6423ca8b0b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-82dcb226-bb2a-4a52-999d-da6423ca8b0b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-82dcb226-bb2a-4a52-999d-da6423ca8b0b): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-82dcb226-bb2a-4a52-999d-da6423ca8b0b)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-82dcb226-bb2a-4a52-999d-da6423ca8b0b)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.449 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.451 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fd948d26540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.454 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.455 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-a0cb299f-6839-4f26-90dc-fc3f64ed156c x-openstack-request-id: req-a0cb299f-6839-4f26-90dc-fc3f64ed156c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-a0cb299f-6839-4f26-90dc-fc3f64ed156c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-a0cb299f-6839-4f26-90dc-fc3f64ed156c): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-a0cb299f-6839-4f26-90dc-fc3f64ed156c)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-a0cb299f-6839-4f26-90dc-fc3f64ed156c)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.564 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fd948d27dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.569 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.570 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.680 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8 x-openstack-request-id: req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-fe8b7d7f-506f-4b51-90c9-95b23a089eb8)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.681 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fd948d27e60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.689 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.690 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.898 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-e5728332-6147-4836-aa36-24d7d052c729 x-openstack-request-id: req-e5728332-6147-4836-aa36-24d7d052c729 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-e5728332-6147-4836-aa36-24d7d052c729 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-e5728332-6147-4836-aa36-24d7d052c729): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-e5728332-6147-4836-aa36-24d7d052c729)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-e5728332-6147-4836-aa36-24d7d052c729)
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.899 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.902 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fd948d24680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.905 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:34 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:34.906 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:34 GMT Server: Apache x-compute-request-id: req-7f764f14-5afe-4931-8f75-fab615eb44d1 x-openstack-request-id: req-7f764f14-5afe-4931-8f75-fab615eb44d1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-7f764f14-5afe-4931-8f75-fab615eb44d1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-7f764f14-5afe-4931-8f75-fab615eb44d1): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-7f764f14-5afe-4931-8f75-fab615eb44d1)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-7f764f14-5afe-4931-8f75-fab615eb44d1)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.022 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.025 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fd948d27ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.029 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.030 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:35 compute-0 nova_compute[185194]: 2026-01-31 10:30:35.073 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.136 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:35 GMT Server: Apache x-compute-request-id: req-11633d22-81ed-4fcf-ba86-f219916ec850 x-openstack-request-id: req-11633d22-81ed-4fcf-ba86-f219916ec850 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-11633d22-81ed-4fcf-ba86-f219916ec850 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-11633d22-81ed-4fcf-ba86-f219916ec850): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-11633d22-81ed-4fcf-ba86-f219916ec850)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-11633d22-81ed-4fcf-ba86-f219916ec850)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.137 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.139 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fd948d24710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.143 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.145 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.258 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:35 GMT Server: Apache x-compute-request-id: req-f27de81f-ca13-4083-9094-f4217ff63be3 x-openstack-request-id: req-f27de81f-ca13-4083-9094-f4217ff63be3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.258 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-f27de81f-ca13-4083-9094-f4217ff63be3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-f27de81f-ca13-4083-9094-f4217ff63be3): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-f27de81f-ca13-4083-9094-f4217ff63be3)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-f27de81f-ca13-4083-9094-f4217ff63be3)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.259 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.264 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fd948d27f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.268 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.268 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.359 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:35 GMT Server: Apache x-compute-request-id: req-51300342-43af-43ff-85ea-1274579e1537 x-openstack-request-id: req-51300342-43af-43ff-85ea-1274579e1537 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.360 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.360 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-51300342-43af-43ff-85ea-1274579e1537 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-51300342-43af-43ff-85ea-1274579e1537): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-51300342-43af-43ff-85ea-1274579e1537)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-51300342-43af-43ff-85ea-1274579e1537)
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.361 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fd948d27020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.367 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:35 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:35.367 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.106 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:35 GMT Server: Apache x-compute-request-id: req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34 x-openstack-request-id: req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-c83e1716-f972-49a0-a0c9-7ff3bc0a1e34)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.107 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fd948d24770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.115 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.116 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.234 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:36 GMT Server: Apache x-compute-request-id: req-b9141529-8b58-4ec6-9b9a-67cf90519299 x-openstack-request-id: req-b9141529-8b58-4ec6-9b9a-67cf90519299 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-b9141529-8b58-4ec6-9b9a-67cf90519299 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-b9141529-8b58-4ec6-9b9a-67cf90519299): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-b9141529-8b58-4ec6-9b9a-67cf90519299)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-b9141529-8b58-4ec6-9b9a-67cf90519299)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.235 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.240 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fd948d24da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fd948cd8140>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.245 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.246 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}54c00b395e4db719d10bf69b02382af16f778552ed0bf41886bba179e47fec9c" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.404 14 DEBUG novaclient.v2.client [-] RESP: [503] Connection: close Content-Length: 218 Content-Type: application/json Date: Sat, 31 Jan 2026 10:30:36 GMT Server: Apache x-compute-request-id: req-920b0156-8fa1-4723-8763-5ca1c4fee67f x-openstack-request-id: req-920b0156-8fa1-4723-8763-5ca1c4fee67f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.405 14 DEBUG novaclient.v2.client [-] RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.405 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a0208ed9-3ae5-4499-81e2-3f2cb621f74d used request id req-920b0156-8fa1-4723-8763-5ca1c4fee67f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager [-] Unable to discover resources: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-920b0156-8fa1-4723-8763-5ca1c4fee67f): novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]:  (HTTP 503) (Request-ID: req-920b0156-8fa1-4723-8763-5ca1c4fee67f)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager Traceback (most recent call last):
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/polling/manager.py", line 959, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     discovered = discoverer.discover(self, param)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 125, in discover
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self.discover_libvirt_polling(manager, param=None)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self(f, *args, **kw)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     do = self.iter(retry_state=retry_state)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return fut.result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 449, in result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self.__get_result()
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib64/python3.12/concurrent/futures/_base.py", line 401, in __get_result
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     raise self._exception
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     result = fn(*args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 209, in discover_libvirt_polling
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     "id": self._get_flavor_id(flavor_xml, instance_id),
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 143, in _get_flavor_id
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     server = self.get_server(instance_id)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/cachetools/__init__.py", line 803, in wrapper
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     v = method(self, *args, **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py", line 178, in get_server
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self.nova_cli.nova_client.servers.get(uuid)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/v2/servers.py", line 977, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self._get("/servers/%s" % base.getid(server), "server")
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/base.py", line 352, in _get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     resp, body = self.api.client.get(url)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager                  ^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/keystoneauth1/adapter.py", line 672, in get
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     return self.request(url, 'GET', **kwargs)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager   File "/usr/lib/python3.12/site-packages/novaclient/client.py", line 78, in request
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager     raise exceptions.from_response(resp, body, url, method)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager novaclient.exceptions.ClientException: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager The Keystone service is temporarily unavailable.
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager  (HTTP 503) (Request-ID: req-920b0156-8fa1-4723-8763-5ca1c4fee67f)
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.406 14 ERROR ceilometer.polling.manager 
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.409 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:36 compute-0 ceilometer_agent_compute[194915]: 2026-01-31 10:30:36.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 31 10:30:37 compute-0 nova_compute[185194]: 2026-01-31 10:30:37.635 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:38 compute-0 podman[252298]: 2026-01-31 10:30:38.017782117 +0000 UTC m=+0.137863189 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 10:30:38 compute-0 podman[252299]: 2026-01-31 10:30:38.021118959 +0000 UTC m=+0.133214915 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute)
Jan 31 10:30:40 compute-0 nova_compute[185194]: 2026-01-31 10:30:40.076 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:40 compute-0 nova_compute[185194]: 2026-01-31 10:30:40.935 185198 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Jan 31 10:30:40 compute-0 podman[252341]: 2026-01-31 10:30:40.977037267 +0000 UTC m=+0.090924443 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 31 10:30:42 compute-0 nova_compute[185194]: 2026-01-31 10:30:42.638 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:42.875 106883 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'b6:e8:5f', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0a:82:0b:51:0d:c7'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 10:30:42 compute-0 nova_compute[185194]: 2026-01-31 10:30:42.876 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:42 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:42.877 106883 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.634 185198 DEBUG nova.compute.manager [req-5ca63413-a720-4482-9bb7-7151e531e209 req-2d4b46a3-2f9d-4037-851f-a2a8a30bd471 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Received event network-vif-plugged-56ca1581-424b-4d3f-8fdb-91c955b6ada3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.635 185198 DEBUG oslo_concurrency.lockutils [req-5ca63413-a720-4482-9bb7-7151e531e209 req-2d4b46a3-2f9d-4037-851f-a2a8a30bd471 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Acquiring lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.636 185198 DEBUG oslo_concurrency.lockutils [req-5ca63413-a720-4482-9bb7-7151e531e209 req-2d4b46a3-2f9d-4037-851f-a2a8a30bd471 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.637 185198 DEBUG oslo_concurrency.lockutils [req-5ca63413-a720-4482-9bb7-7151e531e209 req-2d4b46a3-2f9d-4037-851f-a2a8a30bd471 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.638 185198 DEBUG nova.compute.manager [req-5ca63413-a720-4482-9bb7-7151e531e209 req-2d4b46a3-2f9d-4037-851f-a2a8a30bd471 cac95154f7b046bd89e3638f252a9e14 e68e3cc0ba3f47ec8138d9cffe4125f2 - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Processing event network-vif-plugged-56ca1581-424b-4d3f-8fdb-91c955b6ada3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.639 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Instance event wait completed in 63 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.646 185198 DEBUG nova.virt.driver [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] Emitting event <LifecycleEvent: 1769855443.6459558, a0208ed9-3ae5-4499-81e2-3f2cb621f74d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.647 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] VM Resumed (Lifecycle Event)
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.651 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.658 185198 INFO nova.virt.libvirt.driver [-] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Instance spawned successfully.
Jan 31 10:30:43 compute-0 nova_compute[185194]: 2026-01-31 10:30:43.659 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.063 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.070 185198 DEBUG nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.200 185198 INFO nova.compute.manager [None req-816ece81-9b7d-42d6-9ee2-3db28771d5f6 - - - - - -] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.385 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.386 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.387 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.388 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.389 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.390 185198 DEBUG nova.virt.libvirt.driver [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 10:30:44 compute-0 podman[252366]: 2026-01-31 10:30:44.809296679 +0000 UTC m=+0.106304482 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 10:30:44 compute-0 podman[252365]: 2026-01-31 10:30:44.828648326 +0000 UTC m=+0.126874119 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 31 10:30:44 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:30:44.879 106883 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=089e34f1-a6ad-49ae-8ce3-e9f7773bc2da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.965 185198 INFO nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Took 84.90 seconds to spawn the instance on the hypervisor.
Jan 31 10:30:44 compute-0 nova_compute[185194]: 2026-01-31 10:30:44.966 185198 DEBUG nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 10:30:45 compute-0 nova_compute[185194]: 2026-01-31 10:30:45.078 185198 INFO nova.compute.manager [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] [instance: a0208ed9-3ae5-4499-81e2-3f2cb621f74d] Took 85.74 seconds to build instance.
Jan 31 10:30:45 compute-0 nova_compute[185194]: 2026-01-31 10:30:45.080 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:45 compute-0 nova_compute[185194]: 2026-01-31 10:30:45.394 185198 DEBUG oslo_concurrency.lockutils [None req-cd2416ea-887e-41cc-94f6-6af6427cb5d6 0e738cdb50d649b9968d2e1cdea3b9cb cbe14d6b97b64029ae17f2f239669a6f - - default default] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 86.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:30:47 compute-0 nova_compute[185194]: 2026-01-31 10:30:47.640 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:48 compute-0 ovn_controller[97627]: 2026-01-31T10:30:48Z|00100|memory|INFO|peak resident set size grew 50% in last 2875.1 seconds, from 16000 kB to 24000 kB
Jan 31 10:30:48 compute-0 ovn_controller[97627]: 2026-01-31T10:30:48Z|00101|memory|INFO|idl-cells-OVN_Southbound:10015 idl-cells-Open_vSwitch:813 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:339 lflow-cache-entries-cache-matches:287 lflow-cache-size-KB:1424 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:599 ofctrl_installed_flow_usage-KB:436 ofctrl_sb_flow_ref_usage-KB:229
Jan 31 10:30:48 compute-0 nova_compute[185194]: 2026-01-31 10:30:48.776 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:50 compute-0 nova_compute[185194]: 2026-01-31 10:30:50.083 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:52 compute-0 nova_compute[185194]: 2026-01-31 10:30:52.642 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:54 compute-0 podman[252403]: 2026-01-31 10:30:54.997691507 +0000 UTC m=+0.121712181 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.034 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.087 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.114 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.115 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Triggering sync for uuid a0208ed9-3ae5-4499-81e2-3f2cb621f74d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.116 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.117 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.118 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.119 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.160 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "2c7f9a83-17b3-4e0f-8936-9e6a19920064" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:30:55 compute-0 nova_compute[185194]: 2026-01-31 10:30:55.176 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "a0208ed9-3ae5-4499-81e2-3f2cb621f74d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:30:57 compute-0 nova_compute[185194]: 2026-01-31 10:30:57.646 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:30:59 compute-0 podman[201068]: time="2026-01-31T10:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:30:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29935 "" "Go-http-client/1.1"
Jan 31 10:30:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4845 "" "Go-http-client/1.1"
Jan 31 10:31:00 compute-0 podman[252427]: 2026-01-31 10:31:00.012721815 +0000 UTC m=+0.122943151 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.openshift.expose-services=, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Jan 31 10:31:00 compute-0 podman[252426]: 2026-01-31 10:31:00.021871091 +0000 UTC m=+0.137326906 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 10:31:00 compute-0 nova_compute[185194]: 2026-01-31 10:31:00.090 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:01 compute-0 openstack_network_exporter[204162]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:31:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:31:01 compute-0 openstack_network_exporter[204162]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:31:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:31:02 compute-0 nova_compute[185194]: 2026-01-31 10:31:02.648 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:05 compute-0 nova_compute[185194]: 2026-01-31 10:31:05.093 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:05 compute-0 nova_compute[185194]: 2026-01-31 10:31:05.690 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:05 compute-0 nova_compute[185194]: 2026-01-31 10:31:05.691 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 10:31:05 compute-0 nova_compute[185194]: 2026-01-31 10:31:05.691 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 10:31:06 compute-0 nova_compute[185194]: 2026-01-31 10:31:06.062 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 10:31:06 compute-0 nova_compute[185194]: 2026-01-31 10:31:06.062 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquired lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 10:31:06 compute-0 nova_compute[185194]: 2026-01-31 10:31:06.063 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 10:31:06 compute-0 nova_compute[185194]: 2026-01-31 10:31:06.064 185198 DEBUG nova.objects.instance [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c7f9a83-17b3-4e0f-8936-9e6a19920064 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 10:31:07 compute-0 nova_compute[185194]: 2026-01-31 10:31:07.650 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:09 compute-0 podman[252463]: 2026-01-31 10:31:09.005673797 +0000 UTC m=+0.119516077 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260126, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 10:31:09 compute-0 podman[252462]: 2026-01-31 10:31:09.028667723 +0000 UTC m=+0.142979445 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 10:31:10 compute-0 nova_compute[185194]: 2026-01-31 10:31:10.096 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.088 185198 DEBUG nova.network.neutron [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updating instance_info_cache with network_info: [{"id": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "address": "fa:16:3e:de:66:1f", "network": {"id": "42bce1b2-79a6-4f08-8713-7d1e88cff865", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1223767552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "904e48d9dedd4c41a51e9b18681b22c2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2cd8b2e-64", "ovs_interfaceid": "b2cd8b2e-6468-4312-90f5-af1ffe4ba3ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.106 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Releasing lock "refresh_cache-2c7f9a83-17b3-4e0f-8936-9e6a19920064" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.107 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] [instance: 2c7f9a83-17b3-4e0f-8936-9e6a19920064] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.108 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.108 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.109 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:11 compute-0 nova_compute[185194]: 2026-01-31 10:31:11.110 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:11 compute-0 podman[252507]: 2026-01-31 10:31:11.986686788 +0000 UTC m=+0.110723270 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 31 10:31:12 compute-0 nova_compute[185194]: 2026-01-31 10:31:12.605 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:12 compute-0 nova_compute[185194]: 2026-01-31 10:31:12.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:12 compute-0 nova_compute[185194]: 2026-01-31 10:31:12.607 185198 DEBUG nova.compute.manager [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 10:31:12 compute-0 nova_compute[185194]: 2026-01-31 10:31:12.652 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:13 compute-0 nova_compute[185194]: 2026-01-31 10:31:13.606 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:14 compute-0 podman[252531]: 2026-01-31 10:31:14.977718423 +0000 UTC m=+0.084986066 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:31:15 compute-0 podman[252530]: 2026-01-31 10:31:15.016626602 +0000 UTC m=+0.128915719 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git)
Jan 31 10:31:15 compute-0 nova_compute[185194]: 2026-01-31 10:31:15.099 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:15 compute-0 nova_compute[185194]: 2026-01-31 10:31:15.604 185198 DEBUG oslo_service.periodic_task [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.206 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.207 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.208 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.208 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 10:31:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:31:16.464 106883 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:31:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:31:16.472 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:31:16 compute-0 ovn_metadata_agent[106878]: 2026-01-31 10:31:16.474 106883 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.505 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.597 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.600 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.686 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a0208ed9-3ae5-4499-81e2-3f2cb621f74d/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.694 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.759 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.762 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 10:31:16 compute-0 nova_compute[185194]: 2026-01-31 10:31:16.839 185198 DEBUG oslo_concurrency.processutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c7f9a83-17b3-4e0f-8936-9e6a19920064/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.227 185198 WARNING nova.virt.libvirt.driver [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.229 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=72.30327606201172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.230 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.230 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 10:31:17 compute-0 ovn_controller[97627]: 2026-01-31T10:31:17Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:d0:f6 10.100.0.13
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.526 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance 2c7f9a83-17b3-4e0f-8936-9e6a19920064 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.526 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Instance a0208ed9-3ae5-4499-81e2-3f2cb621f74d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.526 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 10:31:17 compute-0 ovn_controller[97627]: 2026-01-31T10:31:17Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:d0:f6 10.100.0.13
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.527 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.656 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.773 185198 DEBUG nova.compute.provider_tree [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed in ProviderTree for provider: 1f8a458f-baaf-434f-841c-59d735622205 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.806 185198 DEBUG nova.scheduler.client.report [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Inventory has not changed for provider 1f8a458f-baaf-434f-841c-59d735622205 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.835 185198 DEBUG nova.compute.resource_tracker [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 10:31:17 compute-0 nova_compute[185194]: 2026-01-31 10:31:17.835 185198 DEBUG oslo_concurrency.lockutils [None req-60dba25e-dd3b-49f0-9913-e4dce7e19cd3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 10:31:18 compute-0 ovn_controller[97627]: 2026-01-31T10:31:18Z|00102|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 31 10:31:20 compute-0 nova_compute[185194]: 2026-01-31 10:31:20.104 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:22 compute-0 nova_compute[185194]: 2026-01-31 10:31:22.658 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:25 compute-0 nova_compute[185194]: 2026-01-31 10:31:25.108 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:26 compute-0 podman[252589]: 2026-01-31 10:31:26.019119001 +0000 UTC m=+0.132201170 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:31:27 compute-0 nova_compute[185194]: 2026-01-31 10:31:27.663 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:29 compute-0 podman[201068]: time="2026-01-31T10:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:31:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29935 "" "Go-http-client/1.1"
Jan 31 10:31:29 compute-0 podman[201068]: @ - - [31/Jan/2026:10:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4843 "" "Go-http-client/1.1"
Jan 31 10:31:30 compute-0 nova_compute[185194]: 2026-01-31 10:31:30.113 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:30 compute-0 podman[252616]: 2026-01-31 10:31:30.997483088 +0000 UTC m=+0.106533237 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, org.opencontainers.image.created=2026-01-22T05:09:47Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-22T05:09:47Z, config_id=openstack_network_exporter, release=1769056855, org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, architecture=x86_64, io.buildah.version=1.33.7)
Jan 31 10:31:31 compute-0 podman[252615]: 2026-01-31 10:31:31.014305153 +0000 UTC m=+0.128591071 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 10:31:31 compute-0 openstack_network_exporter[204162]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:31:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:31:31 compute-0 openstack_network_exporter[204162]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:31:31 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:31:32 compute-0 nova_compute[185194]: 2026-01-31 10:31:32.668 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:35 compute-0 nova_compute[185194]: 2026-01-31 10:31:35.116 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:37 compute-0 nova_compute[185194]: 2026-01-31 10:31:37.671 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:40 compute-0 podman[252652]: 2026-01-31 10:31:40.003950929 +0000 UTC m=+0.121903306 container health_status 57149fcd26641de92641df2ff5fd11ce04333ba456f8eef22aeb26a8cdf215ab (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:31:40 compute-0 podman[252653]: 2026-01-31 10:31:40.00481131 +0000 UTC m=+0.116352539 container health_status 5c99ec58de0e6739216de276c17af784548a97c5a67e24a47a98e8daa0e86dca (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=bdeee5c0fee659ba4a77c832d3a5caf6, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20260126, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 31 10:31:40 compute-0 nova_compute[185194]: 2026-01-31 10:31:40.120 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:41 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 10:31:42 compute-0 nova_compute[185194]: 2026-01-31 10:31:42.675 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:42 compute-0 podman[252699]: 2026-01-31 10:31:42.970093632 +0000 UTC m=+0.089567799 container health_status 041cbdd9a5d46c97d1d98085804641fc4aaf7ea08834b43901340e94c131099e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 31 10:31:45 compute-0 nova_compute[185194]: 2026-01-31 10:31:45.123 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:45 compute-0 podman[252724]: 2026-01-31 10:31:45.991810746 +0000 UTC m=+0.105409329 container health_status 81f014890c0ffd0f42520aad1af0e960d281ac399a23c5ab78e6a8e10477f882 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d-c285151327e8b16b6b31091680e8efea9c5f2b640b172cf3d9b6f81713d2fd8d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 10:31:45 compute-0 podman[252723]: 2026-01-31 10:31:45.994559304 +0000 UTC m=+0.119958488 container health_status 2a000ce4310b888dd11a2c567f408e59340465ba679dc3b61e3434e56d173f60 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 10:31:47 compute-0 nova_compute[185194]: 2026-01-31 10:31:47.677 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:50 compute-0 nova_compute[185194]: 2026-01-31 10:31:50.126 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:52 compute-0 nova_compute[185194]: 2026-01-31 10:31:52.679 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:53 compute-0 sshd-session[252763]: Accepted publickey for zuul from 192.168.122.10 port 52998 ssh2: ECDSA SHA256:yFL96yDWtbQArGnnZKaf/7sNdAfr/Pj60BbxvM62KkE
Jan 31 10:31:53 compute-0 systemd-logind[795]: New session 31 of user zuul.
Jan 31 10:31:53 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 31 10:31:53 compute-0 sshd-session[252763]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 10:31:53 compute-0 sudo[252767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 10:31:53 compute-0 sudo[252767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 10:31:55 compute-0 nova_compute[185194]: 2026-01-31 10:31:55.130 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:56 compute-0 podman[252909]: 2026-01-31 10:31:56.684533755 +0000 UTC m=+0.122020168 container health_status 7576696c02880e888dc81f2e417267fc9a5f0866bc82f4272cc3a8fb08cc4593 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 31 10:31:57 compute-0 nova_compute[185194]: 2026-01-31 10:31:57.681 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:31:59 compute-0 ovs-vsctl[252970]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 10:31:59 compute-0 podman[201068]: time="2026-01-31T10:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 31 10:31:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29935 "" "Go-http-client/1.1"
Jan 31 10:31:59 compute-0 podman[201068]: @ - - [31/Jan/2026:10:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Jan 31 10:31:59 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 252791 (sos)
Jan 31 10:31:59 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 10:31:59 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 10:32:00 compute-0 nova_compute[185194]: 2026-01-31 10:32:00.135 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:32:00 compute-0 virtqemud[184917]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 10:32:00 compute-0 virtqemud[184917]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 10:32:00 compute-0 virtqemud[184917]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 10:32:01 compute-0 podman[253232]: 2026-01-31 10:32:01.178720244 +0000 UTC m=+0.072464707 container health_status 1defc5578ee2dfdf5ae766534d52d4b5d7e849264a329446cbf92c39478ae480 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '9aa49389d78780d316051305bb7a66607295aabc3be45a32e1dc25360b3cb908-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 10:32:01 compute-0 podman[253235]: 2026-01-31 10:32:01.198315697 +0000 UTC m=+0.090149013 container health_status df40abd751c03ee25a372a639bfa38619b7f19bba405c22e06f05f2beb386044 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '551d1fa2aaa1d09e371b27be89721d1188b73ef34daa87d8889d0e0374356c99-4513b9ade86adc87d1a6c9416d7c3bf860314bfcf0b3a2bcdbd881f6906fc595'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2026-01-22T05:09:47Z, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=812a20485e9d8d728e95b468c2886da21352b9fc, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, 
org.opencontainers.image.revision=812a20485e9d8d728e95b468c2886da21352b9fc, version=9.7, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1769056855, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-01-22T05:09:47Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 31 10:32:01 compute-0 openstack_network_exporter[204162]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 31 10:32:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:32:01 compute-0 openstack_network_exporter[204162]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 31 10:32:01 compute-0 openstack_network_exporter[204162]: 
Jan 31 10:32:01 compute-0 crontab[253435]: (root) LIST (root)
Jan 31 10:32:02 compute-0 nova_compute[185194]: 2026-01-31 10:32:02.684 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:32:04 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 10:32:04 compute-0 systemd[1]: Started Hostname Service.
Jan 31 10:32:05 compute-0 nova_compute[185194]: 2026-01-31 10:32:05.138 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 10:32:07 compute-0 nova_compute[185194]: 2026-01-31 10:32:07.687 185198 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
